SELECTED E-METRICS

Summary of Responses

Source / Measure / Collect now / Could collect / Should collect (1-4 scale) / Average score
1 Patron Accessible Electronic Resources

A / 1.1 Number of electronic full-text journals (R1) / 14 / 1 / 11=12 / 1.1
A / 1.2 Number of electronic reference sources (R2) / 5 / 9 / 12=22 / 1.8
A / 1.3 Number of electronic books (R3) / 6 / 8 / 13=21 / 1.6

2 Use of Networked Resources and Services

A / 2.1 Number of electronic reference transactions (U1) / 9 / 4 / 12=17 / 1.4
A,E,C / 2.2 No. of logins (sessions) on electronic databases (U2) / 14 / 12=19 / 1.6
A,C / 2.3 Number of queries (searches) in electronic databases (U3) / 13 / 2 / 12=20 / 1.7
A,C,E / 2.4 Items requested in electronic databases (U4) / 12 / 3 / 14=24 / 1.7
C / 2.5 Full-text requests from electronic databases / 12 / 2 / 13=18 / 1.4
E / 2.6 Number of remote sessions on electronic services / 7 / 2 / 14=26 / 1.9
A / 2.7 Virtual visits to library’s website and catalog (U5) / 8 / 4 / 13=23 / 1.8

NZ / 2.8 Web page hits / 9 / 3 / 12=17 / 1.4

3 Expenditures for Networked Resources and Related Infrastructure

A / 3.1 Cost of electronic full-text journals (C1) / 11 / 4 / 14=17 / 1.2
A / 3.2 Cost of electronic reference sources (C2) / 8 / 8 / 12=18 / 1.5
A / 3.3 Cost of electronic books (C3) / 8 / 8 / 13=15 / 1.2
A / 3.4 Library expenditures for bibliographic utilities, networks, and consortia (C4) / 8 / 7 / 12=22 / 1.8
E / 3.5 % of total acquisitions $ spent on e-resources / 12 / 2 / 12=14 / 1.2

4 Performance Measures or Ratios

A / 4.1 Percentage of electronic reference transactions of total reference (P1) / 4 / 7 / 14=27 / 1.9

E / 4.2 % of information requests submitted electronically / 1 / 8 / 13=33 / 2.5
A / 4.3 % of electronic books to all monographs (P3) / 1 / 10 / 15=37 / 2.5

E / 4.4 Database logins (sessions) per FTE / 6 / 14=34 / 2.4

E / 4.5 No. of library computer workstation hrs/FTE / 1 / 9 / 15=48 / 3.2
NZ / 4.6 Library computer workstations per EFTS / 3 / 9 / 14=34 / 2.4

A = ARL Data collection manual… C = COUNTER project E = Equinox project

Number of responses to the survey: 16
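The "Should collect" cells above use the notation "n=t": n responses totalling t points on the 1-4 scale, with the average score being t/n. A small sketch (the helper name is hypothetical) showing that reading:

```python
# Sketch of how the "Average score" column appears to be derived: a
# "Should collect" cell such as "12=17" reads as "12 responses totalling
# 17 points on the 1-4 scale", giving 17/12 ≈ 1.4.

def average_score(cell: str) -> float:
    """Parse an 'n=total' cell and return total / n, rounded to 1 decimal."""
    n, total = (int(part) for part in cell.split("="))
    return round(total / n, 1)

print(average_score("12=17"))  # row 2.8 -> 1.4
print(average_score("15=48"))  # row 4.5 -> 3.2
```

This round-trip matches the printed average for every row above where both cells survived, which supports the reading of the "n=t" notation.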

1.1- Largely reliant on the CAUL Deemed List.

-Collected already as part of the CAUL statistics

-I would like to see a tighter definition of electronic journal. Under the CAUL Deemed List, libraries with $2m-$12m budgets all seem to have about 24,000-30,000 ejournals, which is not an accurate reflection of collection quality. Many titles in aggregator services are partial fulltext, or text only, or poor quality graphics, or of questionable content that we would not purchase individually. It gives a false impression that better funding does not provide a higher quality Library collection.

1.2- Unable to answer as ‘reference sources’ not defined

-Collect citation indexes & collection services, would need some work to collect other categories of materials

-Not convinced that reference needs to be separated in the digital environment.

-Collect stats on some, not all of the reference categories listed

-Time-consuming to calculate; excluding free resources is distorting

1.3- May need to change our criteria to exclude free Gutenberg titles? Why are machine-readable books distributed on CD-ROM excluded? Because of lack of search facility or reading software?

-Small number as yet

-USC has not acquired resources in this format to date, however, this information can be collected and reported using the library system/acquisitions module.

2.1- Would require a definition of ‘reference transactions’

-Disparate collection sources but possible.

-Useful as a measure of a particular function/service only.

-Very small as yet

-Some inquiries are collected. Improved arrangements to go into place in 2003.

-Sample periods used here.

2.2- Vendor dependent, except for EZProxy remote use

-Where resource provider collects data: only available for some of our resources

-Available for most databases

-Collected where available.

-Depends on information from vendors, so incomplete

-Database vendor statistics vary: some provide good data, some do not, and not all of it is comparable

-Core databases only (where provided by publisher)

-Currently no consistent way of doing this for all databases

2.3- Where resource provider collects data: only available for some of our resources

-available for most databases

-Collected where available.

-Depends on information from vendors, so incomplete

-Database vendor statistics vary: some provide good data, some do not, and not all of it is comparable

-Core databases only (where provided by publisher)

2.4- Would require some compilation; figure could be misleading, e.g. where a user finds a citation, then accesses the abstract and then prints out the PDF, is that 1 access or 3? Where resource provider collects data: only available for some of our resources

-Collected where available.

-Depends on information from vendors, so incomplete

-Database vendor statistics vary: some provide good data, some do not, and not all of it is comparable

-Core databases only (where provided by publisher)

-This is the preferred method of counting database use

-Does not really equate to usage. A client unable to find any abstracts of value to their topic will usually read many more abstracts than the client who finds something useful straight away. Thus high use could equate to dissatisfaction.

2.5- Where resource provider collects data: only available for some of our resources

-Collected where available.

-Depends on information from vendors, so incomplete

-Database vendor statistics vary: some provide good data, some do not, and not all of it is comparable

-Core databases only (where provided by publisher) – subset of preceding measure

-I consider this to be the best indicator of value for money, especially for databases where the client has access to an abstract before downloading the fulltext.

2.6- Via EZProxy, not vendors

-See U2. We can’t differentiate between ‘local’ and ‘remote’ & it isn’t very meaningful anyway, as remote could mean in another campus lab, at the home of an on-campus student or at the workplace of an off-campus student in another country! Nice to have though

-Use of proxies may render stats unreliable

-Currently unable to collect

-We use Ez-proxy so all logins appear as part of the CSU IP range

-This information is collected by the Library using web reporting software (Webtrends), using our authentication software (EzProxy)

-We prefer downloads to sessions; whether remote or onsite is important. Collected irregularly.

-EzProxy is useful for this. I would prefer the remote access indicator to be expressed as a percentage of all sessions.

2.7- Too hard to measure, especially because of the proxy server issue; also the definition of a ‘visit’

-Use of proxies may render stats unreliable

-Statistical reliability questionable

-Estimate rather than accurate

-This information is collected and reported by ITS, as an overall monitoring of the USC website. This is not maintained separately by the Library.

-It can be difficult to determine unique connection sessions.
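On the difficulty of determining unique sessions noted above: a minimal sketch of one common log-analysis heuristic (not anything the surveyed libraries describe) that starts a new session when hits from the same client address are separated by more than a fixed idle gap. The addresses, timestamps, and the 30-minute threshold are all assumptions for illustration.

```python
# Common sessionisation heuristic (illustrative only): hits from the same
# client address belong to one session until an idle gap exceeds a threshold.

from datetime import datetime, timedelta

GAP = timedelta(minutes=30)  # arbitrary idle threshold, an assumption

def count_sessions(events):
    """events: iterable of (client_ip, datetime) pairs, assumed time-sorted."""
    last_seen = {}   # client_ip -> datetime of most recent hit
    sessions = 0
    for ip, ts in events:
        if ip not in last_seen or ts - last_seen[ip] > GAP:
            sessions += 1  # first hit from this address, or gap exceeded
        last_seen[ip] = ts
    return sessions

hits = [
    ("10.0.0.1", datetime(2003, 2, 12, 9, 0)),
    ("10.0.0.1", datetime(2003, 2, 12, 9, 10)),  # same session
    ("10.0.0.2", datetime(2003, 2, 12, 9, 15)),
    ("10.0.0.1", datetime(2003, 2, 12, 11, 0)),  # >30 min gap: new session
]
print(count_sessions(hits))  # 3
```

As several respondents note, proxying (e.g. EZProxy) collapses many remote users onto a few addresses, so any address-based heuristic like this undercounts in exactly the cases the survey flags.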

2.8- A crude measure. Need a tight definition. Useful for internal trends only.

-Statistical reliability questionable

-Need this as web site structures change and former subcategories of hits are not consistent

-Estimate rather than accurate: doesn’t separate staff visits for maintenance, etc.

-This information is collected and reported by ITS, as an overall monitoring of the USC website. This is not maintained separately by the Library.

-Very vague – assumed to mean any/all library Web pages

-It is extremely useful for internal web management / client focus processes to analyse/report hits per web site directory / service.

3.1- Hard to separate out e-only, esp for print/electronic combinations

-Yes, for individual titles. However, as most of our titles come via an aggregated database, it is difficult to break down cost against title (although average by database is possible – although not always accurate)

-Average cost e-journal compared to print journal is interesting.

3.2- Collect the cost of all electronic materials but do not have a breakdown for reference sources – this could be done but a definition of reference sources would be required.

-Not convinced in the value of separating reference sources out. Many free resources are highly valuable.

3.3- Have raw data but currently do not collect as such

-USC has not acquired resources in this format to date, however, this information can be collected and reported using the library system/acquisitions module.

-Average cost of e-book to print book is interesting.

3.4- $0, or does Kinetica count? Presume CAUL, QULOC subs not counted.

-Some issues of definition – is CAUL included? Kinetica? Staff time?

-Not sure what value this category brings to managing the library

3.5- If this is a compilation of the previous 4 items

-Resource costs are only part of the picture. Also interesting is the cost to operate and manage the print library (lending, shelving etc) compared to the digital library (licensing, linking, interfaces etc).

4.1- Such info useful in the developing stages for planning purposes

-Have only just reinstated reference counts.

-Disparate sources. Collect total information enquiries already, could collect e-inquiries manually

-Manual counts of traditional reference transactions can be unreliable

-Accuracy of counts for traditional reference an issue, as is the definition of “reference”. May be useful to collect for a particular purpose on an irregular basis.

-At present very small but will become meaningful in future.

-We collect for electronic but not for inhouse – so the % cannot be calculated

-Difficult to map all requests in person, service point desks, to email (central V-Reference and Liaison Librarians), or by phone. Sampling is best.

4.3- 2.12% as at October 2002 (though our definition is different at present)

-Again virtually negligible now but will be significant in future


4.4- IT Division could collect

-Raw data mostly available. Where resource provider collects data: only available for some of our resources

-Accuracy depends upon availability of data from vendors

-The FTE of the target faculty areas could be used. E.g. no use comparing ABI usage to GeoRef usage unless you consider you have 7000 business students and 300 geologists. Need to factor in cost when using this to measure effectiveness or value.
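The normalisation suggested in the comment above can be sketched as follows; all figures are invented for illustration, and the database names are used purely as labels.

```python
# Hypothetical sketch of per-target-group normalisation: sessions per FTE of
# the database's intended clientele, plus cost per session. All numbers are
# invented for illustration.

def usage_ratios(sessions: int, fte: int, cost: float):
    """Return (sessions per target-group FTE, cost per session)."""
    return sessions / fte, cost / sessions

databases = {
    # name: (annual sessions, target-group FTE, annual cost $) - hypothetical
    "ABI":    (21000, 7000, 30000),
    "GeoRef": (1200,  300,  9000),
}

for name, (sessions, fte, cost) in databases.items():
    per_fte, per_session = usage_ratios(sessions, fte, cost)
    print(f"{name}: {per_fte:.1f} sessions/FTE, ${per_session:.2f}/session")
```

On these invented figures the smaller database shows heavier use per target FTE but a much higher cost per session, which is the kind of trade-off the comment says the raw per-FTE ratio hides.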

4.5- IT Division could collect

-Library workstations are only a part of the University provision

-Probably can’t be measured. Also, off-campus usage is so significant that this stat not relevant.

-Currently unable to measure as individual login by students not required.

-Not useful because of the high proportion of distance education students

-Best to consider computer access within the libraries in conjunction with campus-wide access from other service points such as central labs, faculties etc, considering the mission is to take the digital library to the clients.

4.6- Confused by mixture of library and ITS-owned workstations and varying software available on each type

Editorial note:

Note the apparent ambiguity about what it means to collect particular statistics – e.g. “available but not used or organised” seems to be one meaning. Another is that because we collect it, we must logically regard it as important. Some libraries have apparently assumed that because it is collected it ranks high in the “should collect” category, and have not indicated this specifically.

SELECTED E-METRICS PROPOSED BY VARIOUS OVERSEAS PROJECTS OR CONZUL MEMBERS, OR CURRENTLY COLLECTED

Source / Measure / AU / AUT / HU / PU / WU / CU / LIU / DU / Comments

Patron Accessible Electronic Resources

A / R1 Number of electronic full-text journals / * / * / * / * / * / * / * / * / In NZULS. ARL exclude free titles
A / R2 Number of electronic reference sources / * / * /  / * / 
A / R3 Number of electronic books / * / * / CP / (*) /  / ARL exclude free titles

Use of Networked Resources and Services

A / U1 Number of electronic reference transactions / CP / CP / Equinox counts differently
A,E,C / U2 Number of logins (sessions) on electronic databases / *? / *? / * / *? / ? /  / * / PU counts “links” to paid e-resources using Ezproxy. LIU, DU: yes where provided by publisher/vendor.
A,C / U3 No. of queries (searches) in electronic databases / * / *? / *? / * / * / Where provided by publisher/vendor.
A,C,E / U4 Items requested in electronic databases / *? / *? / * / Where provided by publisher/vendor. Includes Citations, abstracts, ToC
C / Full-text requests from electronic databases / *? / *? / * / * / Full text only (PDF, HTML etc)
E / Number of remote sessions on electronic services / ? /  / CP
A / U5 Virtual visits to library’s website and catalog / * /  / CP / Dependent on server software. Is the catalogue a database?

CU / Web page hits / * / * / * / Accurate & meaningful measurement is difficult.

Expenditures for Networked Resources and Related Infrastructure

A / C1 Cost of electronic full-text journals / * / CP /  /  / How to deal with print+e subs?
A / C2 Cost of electronic reference sources / * /  / 
A / C3 Cost of electronic books / * / CP / 
A / C4 Library expenditures for bibliographic utilities, networks, and consortia / * / * / *
E / % of total acquisitions $ spent on e-resources / * / * / (*) /  / 

Performance Measures or Ratios

A / P1 Percentage of electronic reference transactions of total reference / CP / CP / CP / Different definitions for A and E

E / % of information requests submitted electronically / CP /  / CP / Different definitions for A and E
A / P2 % of virtual library visits of all library visits / CP / Highly dubious ratio!
A / P3 Percentage of electronic books to all monographs / * / CP / 

E / Database logins (sessions) per FTE /  /  /  / Not yet standard between vendors.

E / No. of library computer workstation hrs/FTE / CP / CP / Why count? Library PCs only – not a useful indicator
CU / Library computer workstations per EFTS / * /  / *

A = ARL Data collection manual…  C = COUNTER project  E = Equinox project  * = Currently collected   = Could collect  CP = Can’t provide

CAUL E-metrics Survey - Final Results 12/2/03