ERDAS APOLLO - 2018 Product Release Details
ERDAS APOLLO is a comprehensive data management, analysis, and delivery system. It enables organizations to catalog and deliver massive volumes of geospatial data, consistently doing so faster and with less hardware than competing server-based products.
Developer friendly RESTful APIs enable easier application integration
ERDAS APOLLO's extensible platform lets you build custom applications on top of the framework using RESTful APIs, for faster development and easier integration with today's technologies. Because RESTful APIs use simple HTTP requests rather than requiring Java programming experience, they are easier to integrate into existing software packages. The exposed ERDAS APOLLO web services also fit more cleanly into other applications, because only the modules a customer's particular needs or workflows require have to be integrated.
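To illustrate what "simple HTTP requests" means in practice, the sketch below builds a catalog search URL that any HTTP client can issue a plain GET against. The endpoint path and parameter names are illustrative placeholders, not the documented ERDAS APOLLO API.

```python
from urllib.parse import urlencode, urljoin

def build_catalog_search_url(base_url, keyword, max_results=10):
    """Build a catalog search request URL.

    The "catalog/search" path and the "q"/"maxResults" parameters
    are hypothetical, chosen only to show the RESTful request shape.
    """
    query = urlencode({"q": keyword, "maxResults": max_results})
    return urljoin(base_url, "catalog/search") + "?" + query

url = build_catalog_search_url("https://example.com/apollo/", "elevation")
# Any HTTP client (curl, a browser, requests) can now GET this URL --
# no Java client libraries are required on the integrating side.
```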
ISO metadata updates enable capture and validation of metadata according to profiles
With the latest upgrades, ERDAS APOLLO supports the ISO/TS 19115-3:2016 standard and can ingest 19115-1 and 19115-2 documents. Support for profiles of that standard has also been added, including the North American Profile and ANZLIC. This allows all necessary metadata to be captured and the data to be validated against the various profiles; the validation tool verifies that every field value is in accordance with the profile definition.
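The idea of validating field values against a profile definition can be sketched as below. The profile structure and field names are invented for illustration; real ISO 19115-3 profiles such as the North American Profile or ANZLIC are far richer than this.

```python
# Hypothetical, simplified profile definition: which fields are
# required and which values they may take.
NAP_LIKE_PROFILE = {
    "title":    {"required": True},
    "language": {"required": True, "allowed": {"eng", "fra"}},
    "topic":    {"required": False, "allowed": {"elevation", "imagery"}},
}

def validate(record, profile):
    """Return a list of violations of the profile definition."""
    errors = []
    for field, rule in profile.items():
        value = record.get(field)
        if rule.get("required") and value is None:
            errors.append(f"missing required field: {field}")
        elif value is not None and "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"invalid value for {field}: {value!r}")
    return errors

errors = validate({"title": "DEM tiles", "language": "deu"}, NAP_LIKE_PROFILE)
# -> ["invalid value for language: 'deu'"]
```

A real validator would work over the ISO XML documents themselves, but the principle is the same: each field value is checked against what the selected profile permits.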
Catalog query performance is significantly improved
Catalog query performance improvements include:
- General performance improvements with SQL Server (2-3 times improvement)
- Hibernate optimizations (2-7 times improvement)
Support for Single Sign-On enables Windows Authentication
With support for Single Sign-On implemented in user management, administrators can configure ERDAS APOLLO server to use Integrated Windows Authentication.
Scalable crawler delivers three to five times faster crawling
Until now, the ERDAS APOLLO crawler has run serially. This release introduces a new “scalable crawler” that distributes jobs across available CPUs, and this scaling works across cluster configurations as well. Current testing has shown a 3–5 times reduction in elapsed crawl time.
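The serial-versus-distributed pattern can be sketched with the Python standard library. The `crawl_one` stand-in and the file paths are invented for illustration; the real crawler's job partitioning and cluster distribution are internal to ERDAS APOLLO.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def crawl_one(path):
    # Stand-in for ingesting a single dataset; the real crawler would
    # read the file, extract its metadata, and register it in the catalog.
    return os.path.basename(path).upper()

def crawl_all(paths, workers=None):
    # workers=None lets the executor size the pool from the machine,
    # mirroring "distribute jobs across available CPUs": each worker
    # picks up the next pending job as soon as it finishes one.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(crawl_one, paths))

results = crawl_all(["/data/a.tif", "/data/b.tif", "/data/c.tif"])
# -> ["A.TIF", "B.TIF", "C.TIF"]
```

Threads suit the I/O-bound parts of crawling; for CPU-bound metadata extraction, `ProcessPoolExecutor` is the analogous way to spread work across cores.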