Application Architecture

Our application architecture wraps proprietary high-performance machine learning and prediction in an innovative, fault-tolerant modern technology stack. The base edition runs in a public cloud, using containers for a secure, private front end, computation, and internal storage. The enterprise edition is extensible to virtual private clouds and parallelized compute infrastructures.

How Applications Work

When an application is first created, an instantiation of the entire stack is generated using a preconfigured learning and prediction workflow for the specific domain, such as customer support automation. Through open APIs, OAuth, and SSL connections, SaaS- and DBMS-based data stores are seamlessly ingested, transformed, and joined with external data to tease out thousands of signals. Our proprietary machine-learning framework automatically learns the informative signals as it builds a customized predictive model. As new data stream in, predictions are easily pushed to and consumed by the downstream SaaS systems your business already uses. And as new data arrive, the models are updated, so predictions remain current and accurate.
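The workflow above can be sketched end to end in a few lines. Everything here is a stand-in: the real ingest, learning, and prediction components are proprietary, and the "model" below is a trivial word-to-label lookup used only to show the shape of the pipeline.

```python
# End-to-end sketch: ingest historical events, learn from them, score new data.
# All function bodies are placeholders, not the actual framework.

def ingest(source):
    # In the real stack this pulls historical data over OAuth/API connections;
    # here it returns canned support tickets with their observed outcomes.
    return [{"text": "login fails", "label": "urgent"},
            {"text": "billing question", "label": "low"}]

def train(rows):
    # Stand-in "model": remember which words appeared with which label.
    model = {}
    for row in rows:
        for word in row["text"].split():
            model[word] = row["label"]
    return model

def predict(model, text):
    # Score a new event with the learned associations; default to "low".
    for word in text.split():
        if word in model:
            return model[word]
    return "low"

model = train(ingest("support_tickets"))
```

A new event such as `predict(model, "login broken")` then reuses the signals learned during training.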


Data Ingest

Each Application requires some amount of historical data to train a model. Training requires inspecting information about a number of past events and the observed outcome of each. To predict which customer is likely to require urgent attention soon, your historical data might be a set of support tickets and the priority level manually assigned to each ticket by support-desk staff. In this case, using OAuth credentials for your support-ticket system, the Application automatically pulls over and inspects the requisite data. During the training phase, data is temporarily stored in a private database. We also augment and enrich a company’s own data with 3rd-party data that can be predictive of the questions being asked, such as public geolocation information and public company records.
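A minimal sketch of the support-ticket example: historical events (tickets) carry the outcome staff assigned to them, and each is enriched with 3rd-party data before training. All field names and the enrichment source here are illustrative, not the actual ingest schema.

```python
# Sketch of historical training data for the support-ticket example.
# Field names are hypothetical.

def enrich(ticket, company_records):
    """Join a raw ticket with 3rd-party data (here, public company records)."""
    enriched = dict(ticket)
    enriched.update(company_records.get(ticket["company"], {}))
    return enriched

# Past events plus the observed outcome: the priority staff assigned.
tickets = [
    {"id": 1, "company": "acme", "subject": "Login fails", "priority": "urgent"},
    {"id": 2, "company": "globex", "subject": "Feature request", "priority": "low"},
]

# Public 3rd-party data keyed by company, used to augment the signals.
company_records = {"acme": {"employees": 5000, "region": "US-East"}}

training_rows = [enrich(t, company_records) for t in tickets]
```

Tickets with no matching 3rd-party record simply pass through unenriched.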


Computational Engine

Ingested historical data and 3rd-party data are transformed and joined using our own algorithms running on a parallel (multicore) machine. Enterprise editions have the option to use our parallel Spark-based compute environment spanning many machines. Signal-processed (“featurized”) data is run through our parallel machine-learning framework, where the optimal model and signals are selected and saved.
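The featurize-then-select step might look like the following. The real framework is proprietary; this sketch just shows the shape: raw rows are transformed into candidate numeric signals, and a trivial variance check stands in for the actual model and signal selection.

```python
# Hypothetical featurization and signal selection; all names are illustrative.

def featurize(row):
    # Turn a raw enriched ticket into candidate numeric signals.
    return {
        "subject_len": len(row["subject"]),
        "has_fail_word": int("fail" in row["subject"].lower()),
        "employees": row.get("employees", 0),
    }

def select_signals(featured):
    # Keep features that take more than one value across the training set --
    # a trivial stand-in for "informative signal" selection.
    names = featured[0].keys()
    return {n for n in names if len({f[n] for f in featured}) > 1}

rows = [
    {"subject": "Login fails", "employees": 5000},
    {"subject": "Feature request", "employees": 5000},
]
featured = [featurize(r) for r in rows]
signals = select_signals(featured)
```

Here `employees` is dropped because it carries no information across these rows, while the subject-derived signals are kept.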


Prediction

As new data arrive at the application (either through an external push or periodic pulls), they undergo the same transformations and joins as the training data. The ML model is then applied and a prediction is made. This prediction and related insights are cached as they are surfaced to the end consumer. In the base edition, predictions are integrated directly into the SaaS products a company is already using. In the enterprise edition, the consumer may be another production-level framework custom to the company, in which case the prediction is served through our RESTful API.
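For the enterprise RESTful path, a client exchange might look like the following. The endpoint URL, payload fields, and response shape are all assumptions for illustration, not the documented API.

```python
import json

# Hypothetical request/response shapes for the RESTful prediction API.

API_URL = "https://api.example.com/v1/predict"  # placeholder endpoint

def build_request(event):
    # New data (pushed or pulled) is serialized like the training rows so the
    # same transformations and joins can be applied server-side.
    return json.dumps({"event": event})

def parse_response(body):
    # The prediction and related insights come back as JSON and can be cached
    # before being surfaced to the downstream consumer.
    payload = json.loads(body)
    return payload["prediction"], payload.get("insights", [])

req = build_request({"id": 3, "subject": "Server down"})
pred, insights = parse_response('{"prediction": "urgent", "insights": ["has_fail_word"]}')
```

A custom production framework would POST `req` to the endpoint and consume the parsed prediction in its own pipeline.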