We are growing and, more than ever, we are hiring talented software engineers. Lokad is facing an impressive set of technical challenges. Thus, when engineers ask us "What will I do at Lokad?", we can return the question with "What would you like to do with us?". Let's have a quick review of the Lokad platform's primary functional areas. If you apply to Lokad, please let us know where your interests lie.
Interested in Lokad? Remote positions are welcome.
Apply by sending your resume to contact@lokad.com.
The Mission (an engineer’s perspective)
Supply chains remain wasteful and inefficient. We’re talking about roughly 15% of the worldwide economy: supply chains are vast, and double-digit improvements remain possible. We want to put supply chains on AI autopilot, and deliver above-human performance while doing so.
The Basics
Our codebase is C# / F# / TypeScript / .NET. We use Git and Visual Studio. Lokad is hosted on Microsoft Azure. We are mostly OS-agnostic, and operate under .NET Core and Linux. We have thousands of unit tests and a robust end-to-end continuous integration pipeline, with which we can redeploy everything in a single click.
We score 12/12 on the Joel Test, but with a few variations that we believe to be superior (keep in mind that this test was written 20 years ago).
Do new candidates write code during their interview? Yes, but we also ask candidates upfront to send us a remarkable piece of code of their own choosing - written by the candidate, obviously. Then, we spend a portion of the interview challenging the candidate on this piece of code.
Do you do hallway usability testing? This certainly works when you are developing a web app. When it comes to the design of a compiler or of a deep learning data pipeline, well, not so much. However, we massively dogfood our own stuff, as our own supply chain scientists happen to be the most active Lokad users.
Event sourcing
Tired of developing CRUD apps coupled with SQL databases? If you're not, you should be. Event sourcing represents a superior alternative in practically every single way: it's easier to maintain, more scalable, and more secure. The primary downside remains the software community's lack of familiarity with this approach. Lokad's core is entirely built on top of event sourcing, and there are no SQL databases in it.
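To make the idea concrete, here is a minimal event-sourcing sketch in C#. It is purely illustrative, not Lokad's actual code: state is never updated in place, it is rebuilt by replaying an append-only stream of immutable events.

```csharp
// Minimal event-sourcing sketch (illustrative only, not Lokad's actual code).
using System.Collections.Generic;

public interface IEvent { }

public sealed record FileUploaded(string Path, long SizeInBytes) : IEvent;
public sealed record FileDeleted(string Path) : IEvent;

public sealed class FileSetState
{
    private readonly Dictionary<string, long> _files = new();

    public IReadOnlyDictionary<string, long> Files => _files;

    // The only way to mutate the state is to apply an event.
    public void Apply(IEvent e)
    {
        switch (e)
        {
            case FileUploaded u: _files[u.Path] = u.SizeInBytes; break;
            case FileDeleted d: _files.Remove(d.Path); break;
        }
    }

    // Rebuild the current state by replaying the full event stream.
    public static FileSetState Replay(IEnumerable<IEvent> stream)
    {
        var state = new FileSetState();
        foreach (var e in stream) state.Apply(e);
        return state;
    }
}
```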
If you want to learn how to craft cloud apps based on modern principles, join us.
Storage Layer
We have organized all our clients' data as a sort of Git repository; except that some clients push files as large as 100GB each. Obviously, we are not using an actual Git back-end; we have rolled our own storage layer on top of Microsoft Azure's Blob Storage.
Having a Git-like storage layer is important for reliability and reproducibility. Pushing giganormous flat files to Lokad takes time. You do not want to expose your clients to the risk of processing a halfway written (or overwritten) flat file. Through Git-like semantics, a file is either there or not there; no unsafe state of the file ever gets exposed to the rest of Lokad.
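For the curious, here is a rough sketch of what Git-like, content-addressed semantics can look like on top of a blob store. The `IBlobStore` interface and the method names are hypothetical; this is not our actual storage layer.

```csharp
// Illustrative sketch of content-addressed, Git-like semantics on top of a
// blob store (hypothetical API, not Lokad's actual storage layer).
using System;
using System.Security.Cryptography;
using System.Threading.Tasks;

public interface IBlobStore
{
    Task PutAsync(string name, byte[] content);  // write is only visible once complete
    Task SetCommitAsync(string hash);            // atomically advance the "current" pointer
}

public static class GitLikeStore
{
    public static async Task<string> PushAsync(IBlobStore store, byte[] fileContent)
    {
        // The blob name is derived from the content itself: a half-written or
        // overwritten file can never be confused with a committed one.
        var hash = Convert.ToHexString(SHA256.HashData(fileContent));

        await store.PutAsync($"objects/{hash}", fileContent);

        // Only after the blob is fully written does the commit pointer move.
        // Readers either see the previous version or the new one, never a
        // partially written state.
        await store.SetCommitAsync(hash);
        return hash;
    }
}
```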
If you love Big Data, then this is the place.
Compiler, front and back
Envision is our home-grown DSL. It is used intensively by our team of Supply Chain Scientists. This language is simple, and yet it delivers tremendous productivity when tackling supply chain optimization challenges. As a whole, Lokad is a platform designed to create and run Envision scripts.
As the entire Envision codebase lives within the Lokad platform, we have the opportunity to automatically rewrite existing scripts as the language evolves. Hence, we do not live forever with every single design mistake we've made. We rewrite and move on. For a compiler engineer, this represents the opportunity to work at an incredibly fast pace.
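As a hedged illustration of what such an automated migration can look like, here is a toy AST rewrite pass in C#. The node types are made up for the example; Envision's real compiler internals are different.

```csharp
// Hypothetical sketch of an automated script migration pass. The AST types
// are illustrative; the real compiler internals differ.
using System.Collections.Generic;
using System.Linq;

public abstract record Node;
public sealed record Call(string Function, IReadOnlyList<Node> Args) : Node;
public sealed record Literal(object Value) : Node;

public static class Migrations
{
    // Example migration: a deprecated built-in gets renamed, and every
    // existing script stored on the platform is rewritten automatically.
    public static Node RenameBuiltin(Node node, string oldName, string newName) =>
        node switch
        {
            Call c => new Call(
                c.Function == oldName ? newName : c.Function,
                c.Args.Select(a => RenameBuiltin(a, oldName, newName)).ToList()),
            _ => node,
        };
}
```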
Have hard-core compiler skills? We need you.
Computation grid
Our compiler targets not a machine but a cloud. Envision has been designed for large-scale parallelism. We have developed our own computation grid, somewhat similar to Spark. Yet, by taking advantage of supply chain data patterns, we frequently achieve 10x to 100x speed-ups over generic approaches - all computing resources being equal.
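As a rough sketch of why supply chain data patterns help: much of the data partitions naturally, e.g. by SKU or by location, so shards can be processed independently with no cross-partition communication. The example below is illustrative C#, not our grid, and the types are hypothetical.

```csharp
// Illustrative only: shard by SKU, fan the shards out to workers; each shard
// is an independent unit of work, which is where the parallel speed-up comes from.
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public sealed record DemandLine(string Sku, string Location, decimal Quantity);

public static class Grid
{
    public static async Task<Dictionary<string, decimal>> TotalDemandPerSkuAsync(
        IEnumerable<DemandLine> lines)
    {
        var shards = lines.GroupBy(l => l.Sku);

        var tasks = shards.Select(shard => Task.Run(() =>
            (Sku: shard.Key, Total: shard.Sum(l => l.Quantity))));

        var results = await Task.WhenAll(tasks);
        return results.ToDictionary(r => r.Sku, r => r.Total);
    }
}
```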
Taking full advantage of a cloud of machines is a game-changing paradigm. Distributed computing is tough, but even embedded systems are likely to be distributed systems in the future. We want Lokad to scale to the largest supply chains, and that will require massively distributed computing.
You may know Spark, but are you capable of building a better Spark? If so, join us.
Machine learning
Forecasting is a cornerstone of supply chain optimization. Our latest forecasting engine is built on differentiable programming, which can be seen as a descendant of deep learning. We sometimes use the "classics" of deep learning. However, our use cases are radically different: while the large majority of the deep learning community focuses on media (images, text, sound), we focus on supply chain data.
Deep learning at Lokad is not about data munging with the latest open source toolkit of the month. It's about revisiting the foundations of statistical learning to deliver superior results for an entire industry. Our challenges are about inventing new algorithms, and rolling our own primitives whenever needed.
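To illustrate the spirit of differentiable programming, without any claim about our actual models, here is a toy C# example where two parameters of a tiny demand model (demand ~= baseline * promotional uplift) are learned by gradient descent against a squared-error loss.

```csharp
// Toy illustration of learning parameters by gradient descent; the gradient
// is derived by hand here, whereas differentiable programming derives it
// automatically. Not a Lokad model.
public static class ToyFit
{
    public static (double Baseline, double Uplift) Fit(
        double[] observedDemand, bool[] isPromoDay, int epochs = 1000, double lr = 0.01)
    {
        double baseline = 1.0, uplift = 1.0;

        for (int epoch = 0; epoch < epochs; epoch++)
        {
            double gradBaseline = 0, gradUplift = 0;

            for (int t = 0; t < observedDemand.Length; t++)
            {
                double factor = isPromoDay[t] ? uplift : 1.0;
                double prediction = baseline * factor;
                double error = prediction - observedDemand[t]; // d(0.5*error^2)/d(prediction)

                gradBaseline += error * factor;
                if (isPromoDay[t]) gradUplift += error * baseline;
            }

            // One gradient step per epoch, averaged over the observations.
            baseline -= lr * gradBaseline / observedDemand.Length;
            uplift -= lr * gradUplift / observedDemand.Length;
        }

        return (baseline, uplift);
    }
}
```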
You may know TensorFlow, but are you capable of building a better TensorFlow? If so, join us.
Solvers and other algorithms
Supply chain optimization involves, well, quite a lot of optimization algorithms. Most of these problems could be framed as integer programming problems; however, in supply chain, we routinely face situations with millions of variables. Traditional branch-and-bound approaches are somewhat underwhelming in those situations, so we have rolled our own specialized solvers.
While our optimization capabilities are still nascent, powerful solvers capable of processing stochastic variables - probabilistic forecasts among others - are part of our roadmap. We want to be able to scale to problems that involve tens of millions of variables and tens of millions of constraints. This will be needed to address the needs of the largest supply chains.
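As a hedged illustration of the kind of structure-exploiting approach that scales where generic branch-and-bound does not, here is a toy greedy heuristic for a budget-constrained purchase decision. It is not our solver; it merely shows the flavor of the problem.

```csharp
// Illustrative only: rank every candidate unit by reward per dollar, then buy
// down the list until the budget runs out. O(n log n), so tens of millions of
// options remain tractable. Not Lokad's solver.
using System.Collections.Generic;
using System.Linq;

public sealed record PurchaseOption(string Sku, decimal UnitCost, double ExpectedReward);

public static class GreedySolver
{
    public static PurchaseOption[] Solve(PurchaseOption[] options, decimal budget)
    {
        var chosen = new List<PurchaseOption>();
        foreach (var opt in options.OrderByDescending(o => o.ExpectedReward / (double)o.UnitCost))
        {
            if (opt.UnitCost > budget) continue; // cannot afford this unit, skip it
            budget -= opt.UnitCost;
            chosen.Add(opt);
        }
        return chosen.ToArray();
    }
}
```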
You like tough algorithms. You think that performance is a feature. So do we. Join us.
Composable dashboarding
It's easy to design a dashboard that looks good if you can set the size of each tile and have datasets hand-picked to be pretty. However, Envision lets end users programmatically generate dashboards containing many kinds of tiles (charts, tables, KPIs, etc.) with arbitrary sizes and positions, displaying real-world data with all its kinks, outliers and edge cases - and the result should still look good.
Did we mention that our dashboards are fast? They are: even complex dashboards are typically rendered on the client side in less than 500ms. Crafting dashboards that are both nice-looking and practical is difficult - to say the least. We have already put infrastructure in place to deliver this performance, but we have a long journey ahead of us to make the most of it.
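To give a flavor of the layout side of the problem, here is a hypothetical C# sketch that flows tiles of arbitrary widths into a fixed-width column grid. The real Envision rendering pipeline is considerably more involved.

```csharp
// Hypothetical sketch: tiles of arbitrary width are flowed into a fixed-width
// column grid, left to right, top to bottom. Illustrative only.
using System;
using System.Collections.Generic;

public sealed record Tile(string Title, int Width, int Height); // sizes in grid cells
public sealed record PlacedTile(Tile Tile, int Column, int Row);

public static class DashboardLayout
{
    public static List<PlacedTile> Flow(IEnumerable<Tile> tiles, int gridColumns)
    {
        var placed = new List<PlacedTile>();
        int col = 0, row = 0, rowHeight = 0;

        foreach (var tile in tiles)
        {
            if (col + tile.Width > gridColumns) // tile does not fit: wrap to the next row
            {
                col = 0;
                row += rowHeight;
                rowHeight = 0;
            }
            placed.Add(new PlacedTile(tile, col, row));
            col += tile.Width;
            rowHeight = Math.Max(rowHeight, tile.Height);
        }
        return placed;
    }
}
```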
Are you capable of solving our meta design problem? Join us.
Mashups and integrations
Supply chain optimization is nothing without data. Increasingly, the whole supply chain lives in the cloud. Apps like the ERP / WMS / OMS are SaaS, just like Lokad. Through their APIs, these apps expose data that is critical for fulfilling Lokad's mission. Thus, Lokad needs to support a growing ecosystem of integrations.
Integrating third-party data can be seen as mere plumbing. Yet, doing it right is usually a challenge. We need to retrieve the entire historical data in a way that is both incremental and reliable. We need to manage many failure modes. It's pointless to blame apps for not achieving 100% uptime; nobody does. Instead, we craft strategies to make the most of the uptime that is granted to us.
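As a hedged sketch of what incremental, failure-tolerant retrieval can look like, here is an illustrative C# example with a hypothetical third-party API interface, a checkpoint, and exponential backoff. Every real connector differs.

```csharp
// Illustrative sketch of an incremental pull from a third-party API
// (hypothetical interfaces; not an actual Lokad connector).
using System;
using System.Threading.Tasks;

public sealed record ChangeRecord(string Id, DateTimeOffset ModifiedAt);

public interface IThirdPartyApi
{
    // Returns all records modified strictly after the given timestamp.
    Task<ChangeRecord[]> GetChangesSinceAsync(DateTimeOffset since);
}

public static class IncrementalSync
{
    // Pull only what changed since the last successful run, retrying with
    // exponential backoff because 100% uptime cannot be assumed upstream.
    public static async Task<DateTimeOffset> PullAsync(
        IThirdPartyApi api, DateTimeOffset checkpoint, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                var changes = await api.GetChangesSinceAsync(checkpoint);
                foreach (var change in changes)
                    checkpoint = change.ModifiedAt > checkpoint ? change.ModifiedAt : checkpoint;
                return checkpoint; // persist this as the next run's starting point
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```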
Want to socialize with half of the B2B software ecosystem? Join us.