Today’s financial markets generate a volume of data barely dreamed about in the early days of electronic trading. Every year, exchanges produce a record-breaking number of transactions, and we know that somewhere within all of that data lies a digital treasure chest. Finding a way to analyze and deliver this valuable data is a multi-headed problem, broadly divided among charting, historical trade display and research scenarios.
In my role as an engineering manager at TT, I’m part of a team that’s been working to solve this problem in the new TT platform. We think we’ve found the answer by leveraging two cutting-edge technologies: Node.js and Amazon Web Services (AWS). I’m excited about our solution, which is now automatically available to all TT platform users. Read on to learn more about our approach and how it can help you overcome the multi-faceted big-data challenges we all encounter today.
Node.js
Node.js is a run-time environment built on the same technology that powers the web: JavaScript. It was built around a few simple ideas and has rapidly grown out of its San Francisco hacker origins into enterprise software used by huge firms like Walmart and PayPal. If you have ever worked at a large enterprise with existing legacy architectures, you probably know how hard it is to sell an organization of that size on a brand-new, relatively unknown technology platform. The answer in the case of Node.js is surprisingly simple: Node.js is arguably the best web service platform available today, even though it hasn’t yet hit the 1.0 version mark. Coding is simple, performance can be faster than Java or C++ web servers, and the platform scales easily with features like the Node.js Cluster API.
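To give a feel for that scalability, here is a minimal sketch of the Cluster API, not our production code: a master process forks one worker per CPU core, and the workers share a single listening port. The port number and restart logic are purely illustrative.

```javascript
// Minimal Cluster API sketch: fork one worker per CPU core.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // The master process only forks and supervises workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', function(worker) {
    console.log('worker ' + worker.process.pid + ' died; restarting');
    cluster.fork();
  });
} else {
  // Each worker runs its own server; connections are balanced among them.
  http.createServer(function(req, res) {
    res.writeHead(200);
    res.end('handled by worker ' + process.pid + '\n');
  }).listen(8000);
}
```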
One of the new technologies that makes the TT platform possible is WebSockets, which we use to deliver real-time data to both our mobile and desktop users around the world. Writing a WebSocket server in Node.js takes just a few lines of code. Check out this example from the popular Node.js “ws” package. This is literally all you need to run a WebSocket server in Node.js:
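This is a minimal version of the server example from the ws documentation; the port number and messages are arbitrary:

```javascript
// A minimal WebSocket server using the "ws" package (npm install ws).
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function(ws) {
  // Log whatever the client sends us.
  ws.on('message', function(message) {
    console.log('received: %s', message);
  });
  // Push a message to the client as soon as it connects.
  ws.send('something');
});
```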
Doing something similar with the popular C++ library libwebsockets or a Java API like Jetty can turn out to be hundreds of lines of code. And if you’ve ever used the über-popular cURL library to make web requests from your C++ code, you know it also commonly leads to memory management issues when handling multi-part replies.
Perhaps what’s most exciting about using Node.js is that we now live in a world that allows for end-to-end JavaScript development. Our “back-end” teams no longer have to schedule their work around the “front-end” guys because we all work on the same team. That’s not to say that Node.js development is exactly the same as client-side JavaScript, but the jump between back-end and front-end work isn’t as far as it used to be. Put that way, it would be crazy to build a brand-new analytics web service and not use Node.js.
Amazon Web Services (AWS)
With our current-generation analytics deployment, it was always a guessing game to figure out how much infrastructure we needed. We used to have to ask ourselves whether we had requisitioned enough racks for a scalable solution that could deliver real-time data and still offer enough storage capacity for the foreseeable future.
But that’s no longer true in the new architecture thanks to cloud technologies, specifically Amazon Web Services (AWS). By co-locating our historical data and analytics within AWS, we can scale up operations when necessary and have access to virtually unlimited storage. Our analytics server auto-scales via AWS Elastic Beanstalk and runs on data that resides wholly in Amazon Simple Storage Service (S3).
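As an illustration of how naturally S3 fits into a Node.js service, here is a sketch of reading a stored object with the AWS SDK for Node.js. The bucket name, object key and region are hypothetical, not our actual layout:

```javascript
// Hypothetical sketch: fetch a day's worth of tick data from S3 using the
// AWS SDK for Node.js (npm install aws-sdk). Names are made up.
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' });

s3.getObject({
  Bucket: 'example-tick-data',         // hypothetical bucket
  Key: 'CME/GE/2014-06-02.ticks.gz'    // hypothetical object key
}, function(err, data) {
  if (err) {
    return console.error('S3 read failed:', err);
  }
  // data.Body is a Buffer containing the stored object.
  console.log('fetched ' + data.Body.length + ' bytes of tick data');
});
```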
In the long run, we will allow user-driven analytics to run on our servers within AWS. Customers will still be able to request smaller data sets and analyze them locally, but they will no longer need to transfer or house big data for the bulk of their research needs. As a result, many firms will be able to eliminate the cost and hassle of recording, persisting and running analytics on locally stored data.
You may have already seen or even used the charting functionality within our next-generation TT platform. If you haven’t, I encourage you to check it out.
With TT, users can chart all contracts that are currently tradeable on our platform, and we plan to add even more markets that will complement what is currently available.
Later this month, we will be rolling out more than 10 years of tick data for some markets. This means that simply by scrolling, you’ll be able to see every trade that occurred in a specific contract over the last 10 years, assuming it has been listed that long.
Of course, the technologies powering this are our Node.js analytics server and our long-term data store in AWS.
What’s Next
By leveraging these efficient technologies, we are providing a unique, cutting-edge analytics platform backed by a cloud-based data storage solution that can handle volumes of data once only dreamed about.
Going forward, we are continuing to build out TT’s analytics capabilities with things like yield pricing/charting, comparison charts, support for options and the ability to provide research capabilities via API access outside of the TT interface. We are also adding more historical data as we speak, and we’re excited that it will be available soon. Look for an announcement in the coming weeks.