Tech Stack ⚙️

Frontend

Our Ultimate dashboard, where our customers configure their bots and manage the conversation flow, is built as a Single Page Application with React and TypeScript.

The application is bundled via Webpack + Babel and makes heavy use of Material-UI components based on a custom Ultimate.ai theme. Our Dialogue Builder, where our customers edit the conversation flow, is built on top of D3. That allows us to draw a beautiful conversation tree that makes it easy for our customers to understand what is happening during a chat.
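
To give a feel for the theming layer, here is a minimal sketch of a custom Material-UI theme, shown with MUI v5's createTheme API. The palette values, font, and component override are placeholders, not the actual Ultimate.ai brand theme.

```typescript
// Illustrative only: these palette values, fonts, and overrides are
// placeholders, not the actual Ultimate.ai brand theme.
import { createTheme } from "@mui/material/styles";

export const ultimateTheme = createTheme({
  palette: {
    primary: { main: "#3f51b5" },   // placeholder brand color
    secondary: { main: "#00bcd4" }, // placeholder accent color
  },
  typography: {
    fontFamily: "'Inter', sans-serif", // placeholder font stack
  },
  components: {
    // Example of a component-level default, applied to every MuiButton.
    MuiButton: {
      defaultProps: { disableElevation: true },
    },
  },
});
```

A theme like this is passed to a ThemeProvider at the root of the app, so every Material-UI component picks it up.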

We talk to the backend APIs via REST with the help of Axios and use OpenAPI specs to make sure we understand each other. Redux helps us make sure all React components have access to the required data. We are currently preparing the move towards GraphQL and Apollo, which will replace our current REST and Redux implementation soon.
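
In practice that looks roughly like the sketch below. The Bot type and the /bots endpoint are hypothetical and only illustrate the pattern; in practice such types come from the OpenAPI specs rather than being written by hand.

```typescript
// Hypothetical example: the Bot type and the /bots endpoint are illustrative.
import axios from "axios";

interface Bot {
  id: string;
  name: string;
  language: string;
}

const api = axios.create({ baseURL: "/api" });

// Fetch the bots shown in the dashboard; the generic parameter ties the
// response data to the (OpenAPI-derived) Bot type at compile time.
export async function fetchBots(): Promise<Bot[]> {
  const response = await api.get<Bot[]>("/bots");
  return response.data;
}
```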

We constantly improve the Dashboard based on input from our developers and feedback from our customers.


Backend

Our backend chapter believes in micro-services. We want to be able to scale our system as freely as possible. The main flavor of our services is TypeScript with Express. For persistence we chose MongoDB, and we use Elasticsearch to search through large volumes of data quickly. Our main data set holds over 200 GB of data, so get ready to experience some scale. 🚀 We use a variation of REST as the main means of communication between services, and pub/sub topics for more asynchronous functionality. Observability and ownership are very important concepts for us; we use Sentry, Grafana dashboards, Pingdom, Elastic, and Argo CD to monitor our system.
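
A bare-bones sketch of what one of these services looks like is below; the route, database, and collection names are made up for illustration, not real service code.

```typescript
// Bare-bones sketch of a TypeScript + Express micro-service. The route,
// database, and collection names are made up for illustration.
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());

const mongo = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");

// Health endpoint for uptime monitoring.
app.get("/health", (_req, res) => {
  res.status(200).json({ status: "ok" });
});

// Example REST resource; in reality each service owns its own slice of data.
app.get("/bots/:id", async (req, res) => {
  const bot = await mongo.db("bots").collection("bots").findOne({ id: req.params.id });
  if (!bot) {
    return res.status(404).json({ error: "not found" });
  }
  return res.json(bot);
});

async function main() {
  await mongo.connect();
  app.listen(3000, () => console.log("service listening on :3000"));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```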

We always strive to take a modern, pragmatic approach to system design and aim for loosely coupled solutions that scale. There are multiple active initiatives around increasing isolation between workflows and services.
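
The pub/sub topics mentioned above are one way we keep services loosely coupled. The sketch below assumes Google Cloud Pub/Sub purely for illustration (our infrastructure runs on Google Cloud); the topic, subscription, and event shape are hypothetical.

```typescript
// Assumption for illustration: Google Cloud Pub/Sub as the broker. The topic,
// subscription, and event shape below are hypothetical.
import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

interface ConversationClosedEvent {
  conversationId: string;
  closedAt: string;
}

// The service that owns conversations publishes an event instead of calling
// downstream services directly, which keeps the services loosely coupled.
export async function publishConversationClosed(event: ConversationClosedEvent) {
  await pubsub.topic("conversation-closed").publishMessage({ json: event });
}

// A consumer in another service (e.g. analytics) reacts asynchronously.
export function listenForClosedConversations() {
  pubsub
    .subscription("analytics-conversation-closed")
    .on("message", (message) => {
      const event = JSON.parse(message.data.toString()) as ConversationClosedEvent;
      console.log("conversation closed:", event.conversationId);
      message.ack();
    });
}
```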

Machine Learning

We take advantage of the TensorFlow machine learning framework to create AI models, and model training runs on our own AI model training system, which uses dynamically provisioned Google Cloud VMs and Redis queues. This setup allows us to run concurrent AI model trainings without running into scalability issues.
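
The queueing side of that system might look roughly like the sketch below. The queue name, job payload, and the use of ioredis are assumptions for illustration; the training itself runs in TensorFlow on the provisioned VMs.

```typescript
// Illustrative sketch of a Redis-backed training queue. Queue name, payload
// shape, and the ioredis client are assumptions; the real workers run
// TensorFlow training on dynamically provisioned Google Cloud VMs.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

interface TrainingJob {
  botId: string;
  datasetVersion: string;
  requestedAt: string;
}

// API side: enqueue a training job. Because jobs are queued rather than run
// in-process, many trainings can be picked up concurrently by separate VMs.
export async function enqueueTraining(job: TrainingJob): Promise<void> {
  await redis.lpush("training-jobs", JSON.stringify(job));
}

// Worker side: block until a job is available, then hand it to the trainer.
export async function nextTrainingJob(): Promise<TrainingJob | null> {
  const result = await redis.brpop("training-jobs", 0);
  return result ? (JSON.parse(result[1]) as TrainingJob) : null;
}
```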

In addition to model training, we have many Python-based micro-services. Those services use the Tornado and Flask frameworks and carry out various AI tasks. Besides essential AI functions like training and inference, our machine learning stack includes auxiliary functions for search; to this end, we also make use of Elasticsearch. Thanks to the micro-services approach, we are able to scale the various AI services independently according to their needs.
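
The search services themselves are Python, but the query shape is language-agnostic. The sketch below uses the Elasticsearch JavaScript client (v8-style API) for consistency with the other examples; the index and field names are made up.

```typescript
// Hypothetical: index and field names are invented, and the real services
// issue equivalent queries from Python. Shown with the v8 JavaScript client.
import { Client } from "@elastic/elasticsearch";

const es = new Client({
  node: process.env.ELASTICSEARCH_URL ?? "http://localhost:9200",
});

// Full-text search over stored training examples, e.g. to find messages
// similar to an incoming visitor message.
export async function searchExamples(query: string) {
  const result = await es.search({
    index: "training-examples",
    query: { match: { text: query } },
    size: 10,
  });
  return result.hits.hits;
}
```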

DevOps

Being part of the Platform Engineering team at Ultimate.ai means striving to improve the efficiency and resiliency of the platform as a whole by leveraging Infrastructure as Code, creating self-service platforms, and eliminating toil. We also focus on helping developers build resilient, robust, and highly scalable micro-services, and we deliver infrastructure that enables highly reliable applications and rapid application development.

All of our workloads at Ultimate.ai are containerized, and we run them on top of our Swarm clusters, which are hosted on Google Cloud. Our CI/CD pipelines run on Bitbucket. We use the ELK stack for logging and Grafana/Moira for monitoring and alerts. We are now actively migrating to Kubernetes.

AI

AI research uses a wide variety of technologies, libraries, and models to find the best solution for our business problems. The core tech stack is the same as in machine learning (TensorFlow, the Python data science stack), and a lot of the code base used for our experiments is the actual production code, for consistency and an easier hand-off from research to product.

Researchers are not limited in which technologies and solutions they use and explore; we are more interested in results and getting our hypotheses tested quickly. Researchers make the calls about technologies independently but also discuss their decisions with other members of the research team and with machine learning engineers to ensure continuity in research as well as the compatibility and feasibility of a production implementation.