White Paper:
Leveraging AirVision Technology For Modern Air Quality Data Management
Traditional air quality data management systems, including the existing E-DAS ATX system, follow an architecture in which a dedicated central management server polls data from remote data loggers over a telemetry link and provides a platform for networked users to review, quality-assure, and report the data to USEPA.
Since the 1990s, technological progress has offered improvements in virtually all elements of these systems. AirVision, designed and developed in the 2006-2009 period and released in 2009, represents a significant departure from the old approaches. It can serve as a platform for a truly modern air quality data management system, at substantially lower risk and in a shorter time frame than the full custom implementation currently being considered by the State of Texas.
AirVision Technology Base
AirVision is a .NET 4.5 application that can run on a physical or virtual server, using any operating system from Windows 7/Server 2008 through Windows 10/Server 2016.
Agilaire elected to orient the product around Microsoft SQL Server (rather than supporting multiple database platforms) for several reasons:
- MS-SQL offers the only truly reliable approach to globally unique identifiers (GUIDs). GUIDs ensure that IDs for new records are unique for all time, which is critical to the ability to archive data out of the database and import it later without any possibility of duplicate key conflicts. While it is possible to ‘hack’ Oracle and create a GUID function, that approach has been shown to still produce duplicate keys, resulting in a critical failure when importing historical data in the future.
- MS-SQL offers a sophisticated Sync Framework, which allows the database to be duplicated and synchronized for various architectures, such as regional office copies of the database, or even synchronization of configuration settings between PC-based data loggers at the sites and the main DMS server. (Configuration mismatches have been a historical issue between the field data loggers, regional offices, and the central DMS, and can occasionally cause large periods of data to be invalidated when a mistake is made and not identified.)
- MS-SQL offers the best platform for .NET application development, allowing for complex applications, forms, reports, and new functionality to be developed rapidly and at low cost.
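The archive-and-reimport property of GUIDs noted above can be illustrated outside the database. The following is a minimal Python sketch of the idea (SQL Server itself generates these values server-side, e.g. via NEWID()); it is not AirVision code:

```python
import uuid

# GUIDs are generated independently, on any machine at any time, yet remain
# globally unique. This is what allows records archived out of the database
# to be re-imported years later without duplicate-key conflicts.
archived_keys = {uuid.uuid4() for _ in range(100_000)}  # keys written long ago
live_keys = {uuid.uuid4() for _ in range(100_000)}      # keys minted today

# Unlike sequential integer keys, the two sets cannot collide even though
# neither generator consulted the other.
assert archived_keys.isdisjoint(live_keys)
print("no duplicate keys between archived and live data")
```

With sequential integer keys, the same re-import would require renumbering every archived record and rewriting every foreign key that referenced it.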
AirVision also offers an off-the-shelf plug-in that allows direct transfer of preliminary and quality-assured data to Exchange Network servers using the AQS XML data flows (Version 2.2, with a commitment to support future versions as they are released). AirVision connects to the OpenNode2 plug-in using a web service that runs in a thread as part of the regular AirVision Windows Service, without the need to run IIS on the AirVision server.
Having broken the traditional ‘data logger’ architecture model, AirVision offers the unique capability to collect primary data directly from analyzers that have their own on-board data storage and communication capabilities, such as the BAM-1020 and TEOM real-time PM samplers, Partisol 2000/2025 FRM samplers, and even gas analyzers such as the API and Thermo series. While the data logger still serves an important role in handling legacy and basic sensors (met sensors, older analyzers) and managing on-site automation of calibration and QA checks, it is not unreasonable to imagine a future system in which the DMS server communicates directly with all field sensors over a broadband link and no remote data device exists at the shelter. AirVision is already architected to support such a model, once analyzer technology ‘catches up’ (e.g., no met sensors can currently buffer averaged data, and the reliability of on-board storage and the robustness of communication protocols still have a long way to go) and sites are upgraded to fully digital devices with on-board storage.
[Figure: Current Methods]
[Figure: Future Methods]
AirVision currently utilizes a dedicated (i.e., ‘thick’) client for user editing, reporting, and system management, primarily to offer the user the richest possible interface for complex activities such as data QA/validation. While it was possible to build a web-based tool for this function, the tools available in 2008-2009 would have required significant sacrifices in the user experience, so the decision was made to delay the implementation of a browser-based client. This proved to be a smart decision, as the technology that looked most promising then (Silverlight) will soon be orphaned. Other development options now exist, but it is difficult to say which technology will still be supported 10-15 years in the future (e.g., Flash's recently announced obsolescence date).
Nevertheless, Agilaire also offers a web-based client for report generation, field data entry forms for some agencies, and most system functions (file import, data QA).
Field Remote Data Logger Technologies
Currently, most state and local air quality agencies use a dedicated data logger device designed around an embedded platform. These devices are designed for stability of operation (software runs from EE/EPROM), high reliability (MTBF > 17 years), and long life cycles (10-15 years).
Agilaire offers its Model 8872, a PC-based data logger. This device uses a fanless PC with an SSD for high reliability, with an interface and database common to the AirVision platform. A background service provides real-time data acquisition, averaging, and primary validation capabilities similar to the Model 8832, with a high degree of digital connectivity to instruments and analyzers. As mentioned previously, this device offers bidirectional synchronization of configuration settings to eliminate errors from inconsistent settings between the field remote and the DMS.
Integration with Existing TCEQ System
In the long term, it may be TCEQ's goal to move from embedded data loggers (ZENO) to the Model 8872, but in the short term, integration of AirVision with the ZENO logger network will likely be required.
Fortunately, ZENO documentation indicates that elements of the CCSAIL protocol, in particular the “DB” command, fit very well into AirVision’s existing Direct Polling framework and can likely be used for collection of average data values and most flags without any software modification. This function could be tested immediately with temporary access to a ZENO unit on a public IP address.
That being said, different vendors have different approaches to presenting other data types, such as calibration data and alarm data. Agilaire would need to review the current ZENO implementation and flagging, but AirVision is designed with the modularity required to easily develop ‘plugins’ for managing different incoming data formats and structures. For example, Agilaire recently added an HTTP / JSON expansion to the Direct Polling framework for the API T640 particulate monitor, in a way that was not disruptive to the existing framework for other instruments.
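The plug-in modularity described above can be sketched as follows. This is an illustrative Python sketch of the pattern only; the class and field names are hypothetical and do not reflect AirVision's actual .NET API:

```python
class PollingPlugin:
    """Hypothetical base class: one plug-in per instrument protocol."""
    def poll_averages(self, start, end):
        raise NotImplementedError

class JsonInstrumentPlugin(PollingPlugin):
    """Sketch of an HTTP/JSON handler like the one added for the API T640."""
    def __init__(self, fetch):
        self.fetch = fetch  # injected transport, e.g. an HTTP GET

    def poll_averages(self, start, end):
        payload = self.fetch(start, end)  # JSON document from the instrument
        # Normalize to a common (timestamp, parameter, value, flags) shape
        # so the rest of the system never sees the vendor format.
        return [(r["ts"], r["param"], r["value"], r.get("flags", ""))
                for r in payload["records"]]

# The core framework sees only the PollingPlugin interface, so adding a new
# protocol handler (e.g. one for the ZENO CCSAIL 'DB' command) would not
# disturb existing instrument support.
fake_fetch = lambda s, e: {"records": [{"ts": s, "param": "PM2.5", "value": 7.1}]}
rows = JsonInstrumentPlugin(fake_fetch).poll_averages("2024-01-01T00:00",
                                                      "2024-01-01T01:00")
print(rows)
```

The essential design point is the normalization step: each plug-in owns one incoming format, and everything downstream (averaging, flagging, storage) works against the common record shape.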
Another common issue in agency DAS/DMS replacements is report formats. We have attached a list of example formats for AirVision, but Agilaire commonly creates a few new report formats for customers to ease the workflow transition.
Finally, there may be questions or needs related to TCEQ's extensive and sophisticated air toxics network. Agilaire recently created a framework of data entry forms for the State of Georgia to manage both field entry of run data for air toxics and the tracking of calibration data and slope/intercept data for those devices. Because this work was developed on a flexible framework, it should be a good starting point for any customizations that TCEQ requires. A separate document showing example data entry forms is included with this submission. We would enjoy a more detailed discussion of the air toxics workflow and requirements at a future date.
Appendix: Other AirVision Technology Considerations
- Database
- A database schema tracking table records any modifications that have occurred to the database schema. This provides a means to determine what modifications a user has performed after the system has shipped. Each in-house schema modification is also associated with a schema version, which is validated by the Data Access Layer (DAL) to ensure consistency.
- A separate reporting schema provides a separately securable contract against which customers may write reports while retaining schema independence.
- Smart, data-driven clustered indexes on high-volume tables optimize data retrieval.
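The schema-version check can be sketched as a startup handshake. This is a simplified Python illustration; the version number and function names are hypothetical, and the real check is performed by the generated .NET DAL against the tracking table:

```python
# The DAL is built against one expected schema version; at startup it
# compares that against the version recorded in the schema tracking table.
EXPECTED_SCHEMA_VERSION = 42  # hypothetical value compiled into this DAL build

def validate_schema(read_db_version):
    """read_db_version stands in for a query against the tracking table."""
    actual = read_db_version()
    if actual != EXPECTED_SCHEMA_VERSION:
        raise RuntimeError(
            f"database schema v{actual} does not match "
            f"DAL build v{EXPECTED_SCHEMA_VERSION}")
    return True

assert validate_schema(lambda: 42)   # versions match: application proceeds
try:
    validate_schema(lambda: 41)      # e.g. a user altered the schema post-ship
except RuntimeError as err:
    print(err)
```

Failing fast here is what prevents a mismatched application build from silently writing into a modified schema.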
- Data Access Layer
- The base data access layer model is generated via the LLBLGen ORM tool in adapter mode. We have also added additional code-generation templates and extended the existing ones to speed development.
- Default validation routines catch common issues: missing non-nullable values, string lengths exceeded, invalid date ranges, type mismatches, foreign key violations, etc.
- Change and delete tracking is maintained on entities and entity collections, allowing minimally sized change sets to be transferred across the wire.
- Data access calls are abstracted behind an interface, providing a single entry point that can be hosted on the server for proxied data access.
- Client data access is compressed on the fly to and from the server to minimize traffic. This helps facilitate WAN and internet access where bandwidth is critical.
- Site-level security is checked and filtered at the DAL level to ensure users have permission to access sites and related data.
- Proxied stored procedure calls are supported across application tiers for both action- and retrieval-style stored procedures. This was implemented to reduce the overhead of data-intensive operations that are best performed at the database.
- Queries can run client-side as well as server-side in a type-safe way (no nested SQL strings in code). Differently shaped entity graphs can also be passed to the client from the same DAL interface. Together, these features allow new client forms to be added without modifying the server.
- Because all data access is type-safe, we are very agile with respect to database modifications: upon regeneration of the DAL, we immediately know the impact of a change and can quickly adapt code to conform to it.
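Two of the points above, minimal change sets and on-the-wire compression, can be illustrated together. This is a simplified sketch using plain Python dictionaries; the actual DAL works with generated LLBLGen entities, not dictionaries:

```python
import json
import zlib

def change_set(original, edited):
    """Track only the fields that changed, rather than resending the entity."""
    return {k: v for k, v in edited.items() if original.get(k) != v}

before = {"id": 7, "value": 12.4, "flag": "", "annotation": ""}
after  = {"id": 7, "value": 12.4, "flag": "<", "annotation": "span check"}

delta = change_set(before, after)                 # only 'flag' and 'annotation'
wire = zlib.compress(json.dumps(delta).encode())  # compressed before the WAN hop

# The server decompresses and applies only the changed fields.
restored = json.loads(zlib.decompress(wire))
assert restored == {"flag": "<", "annotation": "span check"}
print("applied change set:", restored)
```

For bulk edits across thousands of records, sending only changed fields (and compressing the stream) is what keeps remote editing usable over low-bandwidth links.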
- Server
- Serves as a container for services, which may be loaded dynamically at runtime based on configuration.
- Hosts a communications portal that allows clients to communicate with the server, as well as inter-server communication.
- Proxies data access for clients, avoiding the need for client-based database connections.
- Delegates polling/device requests when resources are shared. If, for instance, one COM port or modem is used by both instrument polling and data logger communication, the server queues access appropriately.
- Handles temporary database interruptions gracefully.
- Utilizes a standardized logging system throughout.
- Multiple servers can utilize the same database for redundant operation.
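The shared-resource delegation can be sketched as serialized access to one physical channel. This is an illustrative Python sketch only (the caller names are hypothetical); the actual server uses its own request-queuing mechanism:

```python
import threading

modem_lock = threading.Lock()  # one physical modem shared by two services
call_log = []

def use_modem(caller, action):
    # Both the instrument-polling and logger-polling services funnel through
    # the same lock, so their requests are queued, never interleaved.
    with modem_lock:
        call_log.append(f"{caller}:start")
        action()
        call_log.append(f"{caller}:end")

t1 = threading.Thread(target=use_modem, args=("instrument_poll", lambda: None))
t2 = threading.Thread(target=use_modem, args=("logger_poll", lambda: None))
t1.start(); t2.start(); t1.join(); t2.join()

# Each caller's start/end pair is contiguous in the log: access was queued.
for i in range(0, len(call_log), 2):
    assert call_log[i].split(":")[0] == call_log[i + 1].split(":")[0]
print(call_log)
```

Without this arbitration, two services dialing the same modem concurrently would corrupt each other's sessions.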
- Service Components
- Data Logger Polling Service
- Task Scheduler Service
- Direct Instrument Polling Service
- File Import Service
- AQS XML Service for Exchange Network Plug-Ins
- ADVP Service
- Checks averaged data for specific conditions and triggers actions based on those conditions. Conditions may be based on data flags, percentage variation, data annotation content, value limits, variation from historical averages, cross-site comparisons, etc. When its conditions are met, an ADVP rule triggers notification alarms via email and can flag/update data accordingly.
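An ADVP-style rule can be sketched as a condition/action pair over averaged records. The rule structure, site names, and limit value below are hypothetical, illustrating only the flag-and-notify behavior described above:

```python
# Condition: a simple value-limit check against an averaged record.
def over_limit(record, limit=35.0):
    return record["param"] == "PM2.5" and record["value"] > limit

# Action: flag the value for review and queue a notification.
def flag_and_notify(record, alarms):
    record["flag"] = "H"  # hypothetical 'high value' review flag
    alarms.append(f"ALERT {record['site']}: PM2.5={record['value']}")

records = [
    {"site": "C1",  "param": "PM2.5", "value": 12.0, "flag": ""},
    {"site": "C35", "param": "PM2.5", "value": 48.2, "flag": ""},
]
alarms = []
for rec in records:
    if over_limit(rec):
        flag_and_notify(rec, alarms)  # in AirVision, this step sends e-mail

print(alarms)
print(records[1]["flag"])
```

The other condition types listed above (historical variation, cross-site comparison, annotation content) slot into the same shape: a predicate over one or more records, paired with a flagging/notification action.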