Questions Submitted Regarding Software Performance Evaluation for the Evergreen Integrated Library System

Will multiple vendors be working together on this?

We may ultimately work with multiple vendors on this project. Much of this question depends on the proposals we receive. We expect to receive proposals that take different approaches and that might dovetail with each other. One consultant may be strong in evaluating the PostgreSQL layer while others might propose looking at other parts of the architecture. If we ultimately decide to contract with multiple vendors, we expect the consultants to work with each other as well as with the larger community.

I also want to clarify that we don't expect development to be part of this contract. We see development as being something done in a later phase of the process after seeing the results of an evaluation. 

Can I have more detail on the "work with the community" piece of the request?

We understand that the result of this performance evaluation may be a recommendation to make major changes to Evergreen, such as replacing the staff client or changing a key piece of the underlying architecture. As such, it is important that the overall community, particularly the developer community, is involved as early in the process as possible, preferably starting with the planning stages. Proposed changes to the software require the support of the developer community, and it will be difficult to gain that support if they are not part of the process. In early discussions regarding this project, the developers have also indicated that they hope communication will be early and frequent.

Although MassLNC will be contracting with the vendor/consultant, partially funding the evaluation, and, if needed, providing an environment that can be used for the evaluation, we ultimately view the Evergreen community as the client for this project. With that in mind, just as you might share proposed testing methods with a client in the planning phases of the project, we would expect you to share that information with the community at large. If there are technical questions about the Evergreen infrastructure, we expect those questions to be sent to the community through the existing communication channels. Any status reports should also be shared with the overall community.

There are already communication mechanisms in place to support this communication, including the Evergreen general and development listservs, the Evergreen IRC channel, and the Evergreen wiki. The developers also hold monthly meetings in the IRC channel where plans or progress reports for the evaluation can be shared. If the consultant has other ideas for ways to facilitate communication between themselves and the community, those ideas can be shared through the proposal.

This isn't to say that MassLNC will take a hands-off approach to the project and communication. We will actively work with the consultant to coordinate this communication, to determine the best methods for reaching out to the community, and to nudge the community for responses if none are forthcoming. We have technical staff who are committed to answering technical questions posted through the community channels. In cases where community feedback is conflicting, ambiguous, or simply non-existent, MassLNC will take responsibility for being the final decision maker.

Do you have an expectation for the timeline or the number of hours you're requesting?

The projected budget for this project is $50,000. We would like to hear from the vendors regarding the hours they believe are required for a project of this scale. Having said that, a large portion of this project will be supported by grant funds that need to be encumbered by September 30 and spent by December 1, meaning the evaluation should be done by November 1, if not sooner. We would like the evaluation to start in mid-May, almost immediately after the consultant(s) is selected.

Are you wanting this to be full-time for a short period, or spread over a longer period to give others more time to participate and review?

We see the performance evaluation as a critical project that is preferably done in as short a time period as possible. However, potential consultants do need to build in time to share with and get feedback from the community. MassLNC will work with the consultant to ensure any needed feedback is obtained in a timely manner.

As far as having someone look at the system, how do you see this engagement rolling out; would we be able to gain access to a production system where we can do analysis under load, and perhaps work with the on-site staff to set up specific monitoring, or would the work need to be more of an artificial benchmarking of the application?

We can provide access to a production system under load with the caveat that the use of any monitoring tools must not affect system stability. Access to the production system will require signing a confidentiality agreement. At the same time, MassLNC is prepared to set up a test server with a production-level database and any recommended tools if such a server is recommended. We also have staff willing to create scripts to mimic transactions if necessary. We'll rely on the consultant's expertise to recommend the best environment for performing the evaluation.
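As an illustration of the transaction-mimicking scripts mentioned above, a minimal load-generation harness might look like the following. This is a hypothetical sketch: the fetch function is a stand-in for real OPAC or staff-client requests, and the URL and timings are invented for the example, not part of MassLNC's infrastructure.

```python
import random
import threading
import time

def fetch(url):
    """Stand-in for a real HTTP request to an Evergreen OPAC page.

    A real script would use an HTTP client here; this stub just
    simulates a variable response time so the harness is runnable.
    """
    time.sleep(random.uniform(0.01, 0.05))
    return 200

def worker(urls, results):
    """Issue each request and record its status and duration."""
    for url in urls:
        start = time.monotonic()
        status = fetch(url)
        results.append((url, status, time.monotonic() - start))

# Hypothetical search URL; replace with the real transactions to mimic.
urls = ["/eg/opac/results?query=history"] * 5
results = []
threads = [threading.Thread(target=worker, args=(urls, results))
           for _ in range(4)]  # four concurrent simulated patrons
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 4 workers x 5 requests = 20 recorded transactions
```

Recording per-request durations this way lets a consultant compare response times under different levels of simulated concurrency.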

It is understood that MassLNC will set up an environment for use in the performance evaluation. Will this environment have any shared components with other MassLNC applications or test/dev environments, or will it be completely segregated? The follow-on question is: will we be given full control to make changes to all test environment components, possibly to do some small proofs of concept, without impacting other systems?

The system will be completely segregated from other test environments. The consultant(s) can be given full control to make changes if needed.

Is there currently any instrumentation / performance logging data from the production environment - whether it be from OpenSRF, database systems, Apache, or other components?

Logs from OpenSRF, Apache, the database, etc., are available. I'm not sure how much performance information they contain, but I know that in the case of Postgres and Apache the log style can be tweaked to include transaction durations in addition to timestamps. I don't think OpenSRF logs can be modified in this way, but the logs there do contain timestamps for each entry, so durations could be calculated from the log entries.
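As a sketch of the logging tweaks described above (the exact settings and thresholds would be the consultant's call; the 250 ms value is an arbitrary example): in PostgreSQL, log_min_duration_statement logs the duration of any statement slower than the given threshold, and in Apache, adding %D to a LogFormat records the time taken to serve each request in microseconds.

```
# postgresql.conf: log any statement taking longer than 250 ms,
# along with its duration
log_min_duration_statement = 250

# Apache httpd.conf: a custom access-log format ending in %D,
# the time taken to serve the request in microseconds
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog /var/log/apache2/timed_access.log timed
```

Either setting can be enabled temporarily during the evaluation and rolled back afterward.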

The public-facing OPAC has also been instrumented with debug timing which, when enabled, can be used to identify performance bottlenecks.

Nagios and Munin are also installed in the production environment and will be available for the performance evaluation.

Under the Database section it is noted that relevancy settings for searching were discontinued in the database because of performance. Is the performance of the public-facing OPAC still an issue?

To clarify: the three MassLNC consortia never implemented the relevancy settings because those settings had already been identified as a performance issue prior to our migrations to Evergreen. Other Evergreen sites interested in this performance evaluation did use the relevance settings in previous versions of Evergreen and have seen performance improvements after disabling them.

In assessing OPAC performance that is unrelated to search, the one area we have been able to identify as a performance issue is the retrieval of records that have thousands of copies with monographic parts. Retrieving a similarly-sized record where the copies do not have parts does not encounter the same problem.

In assessing search performance, there have been some issues, but we have not yet been able to pinpoint whether the problem is with performance or with bad search methodology.

Since the OPAC is the system’s public-facing interface, I think we would always welcome speed improvements in this area. However, the performance issues in search are not as problematic as performance in other areas of the staff client, particularly with patron searches, check in/check out, and acquisitions.

Having said that, there have been examples where continuing or adding search features has been unsuccessful because of resulting performance issues. Improving relevance in the search results is something we would like to see, but we have disabled one means of doing so due to diminished performance. There was also a recent attempt to move to pure-SQL search that had to be reverted due to more frequent timeouts. Finding ways to reduce the cost of search so that features like improved relevance could be implemented would be welcome, though not the highest priority of this evaluation.

In researching Evergreen, we found that initial implementations at a couple of libraries had performance issues with the public-facing OPAC. Was this the case with MassLNC's deployment? If so, what components were involved, and what steps or changes, other than the removal of relevancy settings in the database, were taken by MassLNC or the Evergreen developers at the time to mitigate the problem?

I’ll focus on the C/W MARS implementation for this question since they did indeed see poor search performance shortly before going live. In addition to removing the relevance settings, C/W MARS took the following steps to improve response time:

  1. Upgraded from a 32-bit kernel to a 64-bit kernel.
  2. Set the following in /etc/sysctl.conf:

    kernel.shmmax = 34359738368
    kernel.shmall = 8388608
  3. In the opensrf.xml file, C/W MARS commented out the default_preferred_language and default_preferred_language_weight settings and restarted services. This tweak resulted in the biggest improvement to performance.
  4. More recently, C/W MARS has moved to GIN indexes.
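For context on the kernel settings in step 2 (a quick arithmetic check, not part of the original tuning notes): shmmax is the maximum shared memory segment size in bytes, while shmall is the system-wide shared memory limit in pages, so on a system with 4 KiB pages the two values above describe the same 32 GiB ceiling.

```python
# kernel.shmmax is expressed in bytes; kernel.shmall in pages.
PAGE_SIZE = 4096  # typical x86-64 page size in bytes

shmmax = 34359738368  # bytes, from the sysctl.conf lines above
shmall = 8388608      # pages, from the sysctl.conf lines above

print(shmmax // 2**30)               # 32 (GiB)
print(shmall * PAGE_SIZE == shmmax)  # True: both limits equal 32 GiB
```

Keeping the two limits consistent matters because PostgreSQL's shared buffers were allocated from this shared memory pool on kernels of that era.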

How many users exist for the Staff Client vs. OPAC and is there any data on peak concurrency of Staff Client users?  Across the 170 branches, what is the average number of library staff using the Staff Client?

C/W MARS has approximately 1,600 staff users and 1,138,000 public users. A rough estimate is that there are between 1,100 and 1,200 staff accounts logged in at peak times.

On average, there are about 10 staff accounts per branch. However, this number varies widely from branch to branch. The smallest branch at C/W MARS has one staff account while the largest has about 100. That largest branch may have about 30 to 40 simultaneous users logged in during peak periods.

According to Appendix B, the Staff Client allows library staff to, among other things, "...create and retrieve statistical reports".  Is the data source used for these statistical reports the same as the transactional data accessed through the OPAC?

Many Evergreen sites use Slony to replicate their database as a read-only database. In the case of C/W MARS, the replicated database is used only as a data source for statistical reports. Other Evergreen sites may use the read-only database both for statistical reports and for OPAC searches.
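To illustrate the split described above, a reporting layer might route read-only queries to the Slony replica while everything else goes to the primary. This is a hypothetical sketch: the connection strings and the routing rule are illustrative assumptions, not MassLNC's actual configuration.

```python
# Hypothetical connection strings; a real deployment would supply
# its own primary and Slony replica DSNs.
PRIMARY_DSN = "dbname=evergreen host=db-primary"
REPLICA_DSN = "dbname=evergreen host=db-replica"

def choose_dsn(sql: str) -> str:
    """Route read-only statements to the replica and everything else
    (inserts, updates, circulation transactions) to the primary.

    Slony replicas are read-only, so only SELECTs may go there.
    """
    if sql.lstrip().lower().startswith("select"):
        return REPLICA_DSN
    return PRIMARY_DSN

print(choose_dsn("SELECT count(*) FROM action.circulation"))  # replica
print(choose_dsn("UPDATE actor.usr SET active = TRUE"))       # primary
```

Routing reports to a replica this way keeps long-running statistical queries from competing with OPAC and staff-client transactions on the primary.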

According to Appendix B, "...the OPAC is also embedded in the staff client."  Does this mean there are two sets of OPAC code, one embedded within the staff client and one that serves as a web application for public users?  Or does the staff client simply access the web OPAC application and display it within its front-end?

It is a single codebase. The staff client sets a context variable in the Template Toolkit layer to indicate that the user is running from the staff client; ctx.is_staff will be true in that case. The Template Toolkit code then renders staff-specific behavior when that variable is true.
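As a rough illustration of that mechanism (the markup here is invented for the example; only the ctx.is_staff flag comes from the answer above), a Template Toolkit snippet can branch on the flag:

```
[%# Hypothetical template fragment: render a staff-only link when
    the OPAC is displayed inside the staff client. %]
[% IF ctx.is_staff %]
    <a href="...">Edit this record</a>
[% END %]
```

Because the branch happens at render time, public users and staff share the same templates and the same underlying OPAC code.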

