ACC Ltd. provides customized solutions for financial institutions that wish to enter the world of cryptocurrency investment while minimizing their exposure to market volatility. We develop data tools for financial institutions working on cryptocurrency investment and risk management. With broad technological knowledge and hands-on experience in the cryptocurrency space, we tackle the market’s biggest challenges using alternative and traditional data sets, deep blockchain-layer research, and advanced artificial intelligence algorithms.
Cryptocurrencies were first introduced to the world in 2009 and, within a few years, became their own asset class, reaching a market value of over $700 billion. Today, there are more than 2,000 cryptocurrencies, 1,000 wallets, 250 exchanges, and 300 different state regulations, creating enormous volumes of data. Cryptocurrencies are designed to work as a medium of exchange based on cryptographic methods, to secure financial transactions, control the creation of additional units, and verify the transfer of assets. Cryptocurrencies work through distributed ledger technology, typically a blockchain, that serves as a public financial transaction database.
The transparent nature of cryptocurrencies results in endless opportunities for real-time monitoring of alternative and fundamental market factors, which has the potential to fuel powerful AI models.
ACC’s Alto relies heavily on advanced AI models trained on 3 years of historical data from over 30 cryptocurrencies. These models predict the likely lower and upper bounds of a cryptocurrency’s price change over a future timeframe and deliver this valuable information to customers securely every 12 hours.
Because reducing training time is crucial for us, we focused on optimizing it and brought this challenge to the Intel engineering teams.
Working together with the Intel AI Builders engineering team, we were able to optimize our AI model on Intel® Xeon® Scalable processors, utilizing the Intel® Math Kernel Library (Intel® MKL) and the Intel® Optimization for TensorFlow*. This optimized interaction between our technology stack and the hardware led to a reduction in training time, resulting in cost savings in our development process and, most importantly, improved output frequency and accuracy for our customers.
Training & Benchmarking the Model
ACC developed an autonomous trading system for smart cryptocurrency investment. The model predicts fluctuations in value and feeds this data to various other investment and financial systems. The model was trained using a network with 12 layers sequentially stacked using Keras. Training and inference code was written in Python. Datasets were in .npy format, a standard binary file format for persisting a single arbitrary NumPy array on disk, which stores all of the shape and data type information necessary to reconstruct the array correctly on any machine (including one with a different architecture).
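The pipeline described above can be sketched as follows. The file names, layer types, and layer sizes here are illustrative assumptions, since the article does not publish the actual architecture; only the .npy persistence and the 12 sequentially stacked Keras layers come from the text.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Persist and reload a dataset as .npy (synthetic data for illustration).
# np.save/np.load round-trip shape and dtype intact across machines.
x = np.random.rand(128, 20).astype("float32")
y = np.random.rand(128, 1).astype("float32")
np.save("features.npy", x)
x_train = np.load("features.npy")

# A 12-layer sequentially stacked Keras network; layer widths are
# placeholders -- the real architecture is not disclosed.
model = keras.Sequential()
for units in (256, 128, 128, 64, 64, 32, 32, 16, 16, 8, 8):
    model.add(layers.Dense(units, activation="relu"))
model.add(layers.Dense(1))  # 12th layer: regression output
model.build(input_shape=(None, x_train.shape[1]))

model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y, epochs=1, batch_size=32, verbose=0)
```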
Benchmarking was done using Intel Xeon Scalable processors as the inference environment. The Intel® OpenMP* runtime library was used, which has the ability to bind OpenMP* threads to physical processing units. This binding was controlled using the KMP_AFFINITY environment variable. Depending on the system (machine) topology, application, and operating system, thread affinity can have a dramatic effect on application speed. Modularizing the code ensured the existence of parallel regions, while the OMP_NUM_THREADS environment variable enabled us to specify the number of threads to use for those parallel regions1.
Benchmarking was conducted by varying these parameter values and fine-tuning them as part of the optimization process.
The best training time on the Intel® Xeon® Gold 6252 processor was achieved with the following environment settings:
- export KMP_BLOCKTIME=0
- export OMP_NUM_THREADS=48
- export KMP_AFFINITY='granularity=fine,verbose,compact,1,0'
- export KMP_SETTINGS=1
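The same settings can be applied programmatically from the training script itself; a minimal sketch is shown below. The key constraint is that they must be set before TensorFlow is imported, because the Intel OpenMP runtime reads them once, when the library is loaded.

```python
import os

# Tuned OpenMP/KMP settings from the benchmark, applied in-process.
os.environ["KMP_BLOCKTIME"] = "0"     # threads sleep right after parallel work
os.environ["OMP_NUM_THREADS"] = "48"  # one thread per physical core (2 sockets x 24)
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"
os.environ["KMP_SETTINGS"] = "1"      # print the effective OpenMP settings at startup

# import tensorflow as tf  # import only after the environment is configured
```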
With this setup and configuration, we were able to reduce the training time by 205 seconds – a reduction of 39% compared to the baseline, as shown in the chart below2.
This is an outstanding result for us due to the impact it delivers to our production chain. Every second of model-creation time equals five (5) hours in our production chain. So, the reduction of 205 seconds translates to saving roughly 42 days for us, which is a phenomenal result!
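The arithmetic behind that claim can be checked directly from the stated ratio of 5 production-chain hours per second of model-creation time:

```python
seconds_saved = 205      # reduction in training time
hours_per_second = 5     # stated production-chain ratio

hours_saved = seconds_saved * hours_per_second  # 1025 hours
days_saved = hours_saved / 24                   # about 42.7 days
print(days_saved)
```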
At ACC we are on an endless mission of locating, tracking, and designing data sets related to key factors of the cryptocurrency market. As we scale out our data sets, the running time and computing cost of our AI models also increase, but thanks to the Intel AI Builders Program we now have the resources to optimize on the latest Intel Architecture, co-marketing support to promote our solution, and matchmaking opportunities with Intel’s enterprise end-user customers.
Learn more at www.accrypto.io
NEW: Tested by Intel as of 04/08/2019 on Intel AI Builder cloud; Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (CLX), 2 sockets, 24 cores per socket; Ubuntu 16.04.3 LTS; Deep Learning Framework: Intel optimized TensorFlow 1.12
BASELINE: Tested by Intel as of 04/08/2019 on Intel AI Builder cloud; Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (CLX), 2 sockets, 24 cores per socket; Ubuntu 16.04.3 LTS; Deep Learning Framework: TensorFlow 1.12
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. Performance results are based on testing as of August 2018 and may not reflect all publicly available security updates.
See configuration disclosure for details. No product can be absolutely secure.