The First 30 Days

Sep 1, 2004 12:00 PM
By Don Sturgis

Should one accept a physical security system being installed at a facility without first testing its operation? Shouldn't the system be tested for conditions encountered on a daily basis and also under the most adverse situations?

Previous articles in this series have covered Factory Acceptance Tests (FAT) and Site Acceptance Tests (SAT). Those tests demonstrate that the basic system design is sound and that all system components are connected and function correctly. A 30-day Operational Acceptance Test (OAT) is intended to show that the system can operate as intended continuously while meeting its uptime requirements. Any punch-list items identified during the SAT must be completed prior to an OAT unless there are truly valid reasons to proceed in spite of unresolved issues.

Several aspects of Operational Acceptance Testing are unique to security and emergency response systems. This article explains how the scope of a security system OAT plan differs in several ways from the scope of an OAT plan for business and information systems. When the customer's IT organization is involved in security OAT planning, these differences should be brought to the attention of the test planners.

The Operational Acceptance Test addresses special issues requiring extended test periods, such as verifying false alarm rates and nuisance alarm rates. Other aspects of the system are reviewed throughout the OAT, and can be adjusted if necessary to meet required levels of usefulness. During the test it is common to have "errors" reported that are not actually errors, but are simply differences between the expectations of the system operators and users and what has been initially set up.

For example, the Accounting Department may have people who start work earlier than the rest of the company; for them, a custom access level may be required. There may also be unexpected conditions that must be taken into account. During the months of the year when the sun rises late or sets early, headlamps from delivery trucks may be found to "blind" a camera, requiring the camera to be relocated to a nearby position. Camera brightness or contrast settings may need to be adjusted to account for a range of lighting conditions. These changes do not require restarting the test, but rather identifying what changes need to be addressed.

With many IT system projects, the performance, load and stress tests are included in the OAT. Because a security system goes live starting with the OAT, the full capabilities of the security system must be known prior to that point. Thus the Factory Acceptance Test, or more rarely the Site Acceptance Test, is where security system performance, load and stress testing is done. (See previous articles in this series, AC&SS, July and August 2004).

Unlike the previous FAT and SAT tests, the OAT goes beyond pure system testing. This is where the People-Process-Technology triangle comes into play. Systems are operated by people, in accordance with policies and procedures that govern security operations and how the system will be used in support of operations activities. Operational testing of the security system verifies that its operation fits well into the framework of related policies and procedures. Thus the OAT is the point at which the customer's security project team steps back from involvement in the system, and the operations staff fully takes over the system management and operations. Furthermore, the OAT is the test where customer personnel training is verified and expanded as needed.

Because the OAT is the key handover from the security project team to the operations staff, it is important that, prior to the OAT, standard procedures be defined for reporting system problems and for escalating unresolved problems, both within the customer organization and within the vendor organization responsible for ongoing service. These procedures must be consistent with contractual service level agreements, system uptime requirements, and risk management's interest in the system. Both customer and vendor personnel should perform a dry run of the various trouble-reporting procedures to make sure that there are no misunderstandings and to verify that the defined procedures are appropriate and workable. These are the procedures that will be used to report and resolve problems that come up during the 30-day OAT.

A well-designed and executed Operational Acceptance Test verifies that the system and all of its related operational elements are in place and working. This results in satisfactory completion of customer turnover, and prevents future end-user nuisance support calls.

It is good to remember that no OAT can be 100 percent accurate, since not all conditions that impact the system can be tested, such as extremes of weather and temperature. If the OAT is being performed in the summer, it would be impractical to see how system elements are affected by extreme cold conditions; likewise if testing in the winter for the effects of extremely high temperatures.

As with the other types of acceptance tests, the key to a successful OAT is a good test plan. The Operational Acceptance Test plan should be prepared as a collaborative effort by the system provider and end-user. The plan should include, but not be limited to, an appropriate subset of tests from the factory and site acceptance tests.

The SAT will have tested every type of end-device, field panel, workstation, network component and the communication infrastructure, as well as the system as a whole. Now the OAT will test the system's performance under sustained operating conditions.

The nature of the OAT also calls for a test element that is not part of FAT or SAT testing: operational test exercises. Here is where the security scenarios come into play again. Unlike business information systems, security systems must respond properly not only to daily operational events but also to events that one hopes will rarely or never happen, and which are not likely to occur during the OAT test period. Thus operational test exercises, performed by customer personnel, are scheduled during the OAT test period. The test exercises should be based upon the security scenarios that were defined in the RFP or prior to the FAT.

The system should perform in accordance with the approved operational test plan, within the agreed-upon performance standards, under full operating conditions for a period of 30 consecutive calendar days. In the event of an error or malfunction (including but not limited to equipment failures), the system provider should make the necessary corrections as they are needed, at no additional cost to the end-user.

A single or even a few errors should not warrant a restart of the test, unless the required up-time has not been maintained or the vendor fails to respond within the service agreement.

If stopped and restarted, the OAT must continue until a full 30-day period of continuous operation is reached.

In the case that access control or ID badge issuance is part of deploying a new system, all personnel should be issued cards/badges before the OAT begins. Card/badge users should be instructed on their usage as part of the process. If the use of the badge or ID system is a significant change from previous procedures, a week or two should be allowed for personnel to become accustomed to the use of the system. If existing cards or badges are to be re-used with the new system, they should be verified as being fully operational prior to the OAT.

Normal operations personnel should not be saddled with special training procedures or preparatory processes required as part of getting the new system deployed; the security project team should attend to such actions. Prior to the OAT it is necessary to verify that:

  • the operational testing is being performed with the expected normal badging traffic (with an operational test exercise to check the worst-case conditions);

  • the badging procedures are appropriate and effective; and

  • the access levels (when and where cardholders may go) are correct.


Performance of daytime and nighttime tests on the CCTV systems can determine:

  • Is the nighttime illumination adequate?

  • Does the camera produce the expected results with the current type of lighting (halogen, sodium, mercury, fluorescent)?

  • Are the camera parameters proper? (Should a color camera be replaced with a color and black/white type? Is the lens selection proper?)

  • Are the camera control presets properly set?

  • Are the alarms coupled with the appropriate camera preset?

  • Is the camera placement correct or should it be relocated for better viewing?

  • Is wind vibration causing a jittery picture, requiring a different mount?


A perimeter fence intrusion detection system is intended to provide an alarm when a human being compromises the barrier by climbing over or cutting through it. This is typically accomplished by detecting motion on the fence. Unfortunately, other causes, such as animals (large and small) or wind-blown debris, may put a portion of the fence in motion. These are "real" or positive alarms in that they were caused by detected motion; however, they are "nuisance alarms" because they were not caused by a human intruder. Usually such systems can be adjusted to suit the environmental conditions. Although initial adjustments are made prior to the OAT, continuous operation during the OAT may show that additional adjustments are needed.

Interior intrusion detection devices are less prone to, but not immune to, nuisance alarms. Many use dual-element detectors employing microwave and passive infrared (PIR) technologies. Published data sheets claim that:

  • The detector provides a high tolerance to temperature changes (caused by heating vents, moving curtains and heavy machinery vibrations) that minimizes "false activations."

  • The detector "distinguishes between false alarms and actual intruders" and "automatically adjusts itself to the new conditions, maintaining sensitivity level and detection capability with virtually no false alarms."

These false activations or false alarms are really nuisance alarms caused by something other than a human intruder.

A nuisance alarm may require the adjustment and/or replacement of existing detection elements.

A 30-day OAT will provide time to experience how the detection functionality works and to determine whether or not changes are needed to reduce or eliminate nuisance alarms.

Over a period of usage, doors and gates can become misaligned or the closing mechanism can malfunction such that they do not fully close automatically. This may cause the system to report "door ajar" or "door open too long" messages. "Door forced open" messages may be generated when a request-to-exit (REX) device does not detect a person exiting a reader-controlled door or by doors "chattering" due to vibration or bounce. These are often called "false" alarms since they were caused by a portion of the system hardware not operating correctly, although an alarm like "door open too long" identifies an actual security vulnerability condition that exists.

False alarms are a result of a system malfunction, which requires immediate repair or replacement of the defective element.

Note that some alarms associated with door closing problems resolve themselves, for example, when someone uses a door that has not been closed completely and manually closes it. During the OAT test, it is wise to be alert for this type of alarm and to investigate its cause even though it appears to resolve itself. To leave such alarms uninvestigated could literally open the door for a future security breach. Unless such false alarms cannot be corrected within the contractually specified time frame, they should not be cause to stop the OAT test period.

If the system has standby power and battery backup provisions, proper operation should have been tested as part of the Site Acceptance Testing. However, during OAT, users should consider testing access control panels with battery backup provisions that service the largest numbers of card readers. After disconnecting the AC power from these panels, users should verify that the panels and card readers continue to operate for the amount of time indicated in the project specifications or vendor's literature. An additional purpose for testing the battery backup capability during the OAT is to make sure that the appropriate "AC Power Failure" and "Battery Low" messages are displayed, and that follow-up procedures will result in the power being restored prior to exhausting the battery power.

If the components of the physical security system are connected to the same communications network as the company's business system and other independent applications, it is mandatory to exercise the scenarios that create the maximum communication traffic. Even though these tests were run during the FAT, only now, with the full system completely deployed, can they demonstrate that the existing communications network can handle the unusually high traffic that emergencies will demand.

The advantage of creating security scenarios and using these as part of the testing process was discussed in an earlier article (see June 2004 issue). Since the scenarios basically define why and how the end-user plans to use the system, any scenarios that are departures from normal operating procedures must be tested using operational test exercises.

Given that a purpose of the OAT is to test operational procedures as well as the system itself, scenario-based testing could be performed at three different times:

  • during normal daytime hours;

  • during nighttime hours; and

  • over the weekend, thus making sure to exercise all shifts for shift-based staffing.

After determining whether or not system procedures are appropriate for each shift, they can be adjusted accordingly.

It is critical that system availability (uptime) requirements be clearly defined, to prevent inappropriate restarts of the test period and vendor-customer disagreements about test restarts. Server availability and system component availability may have different requirements. For redundant servers, the system availability requirement may be 100 percent; fail-over time between the primary and secondary servers would not be counted as downtime unless it exceeds the transfer time specified by the vendor. Hardware component availability requirements should be set below 100 percent. For example, 99.9 percent allows about 9 hours of downtime per year. In real life, however, component downtime is not distributed evenly across all components: some components never fail, while others may fail multiple times.

To calculate the allowable downtime at 99.9 percent availability, first determine the total component uptime hours by multiplying the total number of components by 24 hours by 30 days (the OAT test period). The allowable component downtime is then 0.1 percent of that total.

As an example, let's perform this calculation for a system with 100 card readers, two badge printers, 50 CCTV cameras, and 10 computer workstations. That's a total of 162 devices times 24 hours times 30 days, or 116,640 device-hours. The allowable downtime is 116.64 hours (116,640 hours times 0.1 percent). That means a workstation can be down for 6 hours, two readers can each malfunction for 30 hours, and two cameras can each be out for 20 hours (a total of 106 hours), and the system will still meet the uptime requirement. It also means that a single camera could malfunction repeatedly and be out for 100 hours (5 hours of downtime for each of 20 outages), and the availability requirement would still be met in terms of downtime. Thus there is a need to define a recurring problem as such and require replacement of the faulty item.

A single error can have a system-wide impact. What if a database glitch causes only a partial list of cardholders to be downloaded to the panels, affecting all 100 card readers? Does full functionality exist at the card readers? No. If that problem is not resolved within 2 hours, the component downtime will exceed 200 hours (2 hours times 100 readers), well past the 116.64-hour allowance. The test should be restarted.
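The downtime-budget arithmetic above can be sketched in a few lines. This is an illustrative calculation only; the device counts and the 99.9 percent target come from the article's example, and the function name is an assumption, not part of any product or standard.

```python
# Illustrative sketch of the component-availability arithmetic described above.
HOURS_PER_DAY = 24
OAT_DAYS = 30

def downtime_budget(device_counts, availability=0.999, days=OAT_DAYS):
    """Total allowable component downtime (device-hours) for the test period."""
    total_devices = sum(device_counts.values())
    total_device_hours = total_devices * HOURS_PER_DAY * days
    return total_device_hours * (1 - availability)

devices = {
    "card_readers": 100,
    "badge_printers": 2,
    "cctv_cameras": 50,
    "workstations": 10,
}

budget = downtime_budget(devices)  # 162 * 24 * 30 * 0.001 = 116.64 device-hours

# Scattered outages from the article's example: one workstation down 6 hours,
# two readers down 30 hours each, two cameras down 20 hours each.
scattered = 6 + 2 * 30 + 2 * 20    # 106 device-hours: within budget

# A single database glitch that degrades all 100 readers for 2 hours
# consumes 200 device-hours and by itself exceeds the budget.
glitch = 2 * devices["card_readers"]

print(round(budget, 2), scattered <= budget, glitch > budget)
```

The point the sketch makes concrete is that one system-wide fault can consume more of the budget than many scattered single-device failures combined.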

Maintenance contracts often specify service response times and problem resolution times. These terms can be taken into consideration during the OAT. If the vendor servicing the system during the OAT will also be maintaining the system under contract, the maintenance personnel should be required to respond as expected.

The test clock should be restarted only when

  • a problem is not resolved fast enough for availability requirements to be met; or

  • the vendor's service response time does not meet the contractual requirements (if this is part of the test).

The customer always has the option of continuing the test even if these criteria are not strictly met. For example, if the vendor's response is 15 minutes past the required response time, the customer may elect to continue the test anyway.
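The restart criteria above, including the customer's option to waive a restart, amount to a small decision rule. The following sketch is illustrative only; the function and parameter names are assumptions made for this example.

```python
# Illustrative sketch of the OAT test-clock restart decision described above.
def should_restart_clock(availability_met: bool,
                         vendor_response_met: bool,
                         response_time_in_scope: bool = True,
                         customer_waives: bool = False) -> bool:
    """Return True if the 30-day OAT clock should be restarted."""
    if customer_waives:
        return False  # the customer may elect to continue the test anyway
    if not availability_met:
        return True   # uptime requirement was not maintained
    if response_time_in_scope and not vendor_response_met:
        return True   # contractual service response time was missed
    return False

# A late vendor response that the customer chooses to overlook
# does not restart the clock.
print(should_restart_clock(availability_met=True,
                           vendor_response_met=False,
                           customer_waives=True))
```

Writing the rule down this explicitly, even just in a test plan rather than in code, is one way to prevent vendor-customer disagreements mid-test.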

It is helpful to work out, in advance of the test, how problems will be classified with regard to starting and stopping the test clock. The accompanying chart provides suggested categories. It is a good idea for customer and vendor personnel to brainstorm about example problems as an exercise prior to starting the test, to make sure that the thinking on both sides is about the same with regard to classifying problems.

Because the OAT is an ongoing test involving operations personnel and a "live" system, test planning and preparation (including training) are critically important. When vendor and customer personnel have already collaborated on FAT and SAT testing, the challenge of conducting a successful OAT lessens.

The OAT is also a good time to expand the organization's project knowledge base by writing a "lessons learned" document. Both customer and vendor can benefit from such a review. A well-planned and executed Operational Acceptance Test can provide a smooth conclusion to a major security project, even one that had a rough start. It is more than worth the effort.
Suggested Problem Classification Categories

SHOW STOPPER: Stop the test and fix, then restart the clock.

MAJOR: Does not stop the test clock, but must be fixed before the test can end.

MINOR: Test can be completed with this as a punch-list item.

SPECIAL ISSUE: Requires investigation or collaboration to determine which of the above categories should apply.
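The chart's categories can be captured as a small lookup, which is handy if problem reports are tracked in a log or spreadsheet during the test. This sketch is illustrative only; the enum and helper names are assumptions, not part of any standard.

```python
# Illustrative mapping of the suggested problem-classification categories
# to their effect on the OAT test clock.
from enum import Enum

class ProblemClass(Enum):
    SHOW_STOPPER = "stop test, fix, then restart the clock"
    MAJOR = "clock keeps running, but must be fixed before the test can end"
    MINOR = "test can complete with this as a punch-list item"
    SPECIAL_ISSUE = "investigate or collaborate to choose a category above"

def stops_clock(problem: ProblemClass) -> bool:
    """Only a show stopper stops and restarts the test clock."""
    return problem is ProblemClass.SHOW_STOPPER

print(stops_clock(ProblemClass.SHOW_STOPPER), stops_clock(ProblemClass.MINOR))
```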

Don Sturgis, CPP, is a senior security consultant for Ray Bernard Consulting Services (RBCS), a firm that provides high-security consulting services for public and private facilities. This article is excerpted material from the upcoming book The Handbook of Physical Security System Testing by Ray Bernard and Don Sturgis, scheduled for publication by Auerbach Publications in the spring of 2005. For more information about Don Sturgis and RBCS, go to or call 949-831-6788.
