Preview: skHFT - Liquidity Flow for markets

This is a preview of ShezheaNET's whitepaper regarding skHFT. Data is subject to change.

This article is heavily simplified for ease of understanding.

If you place an order at a fast-food restaurant, you receive, as described in the contract of purchase, exactly the goods you asked for. If you place an order in any open market, you won't receive what you ordered, but something different.

This article covers only cryptocurrencies, not other markets.

Purchased order:

Price: $100,000

Received order:

Price: $100,300

Fee: $100

Difference in %:

+0.3% slippage

+0.1% fees
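The breakdown above can be sketched as a small calculation. This is an illustrative example using the numbers from the order comparison; the function name and structure are assumptions, not part of skHFT:

```python
# Hypothetical sketch: break down how much more a fill costs than the
# quoted price, using the figures from the example above.

def execution_cost(quoted_price: float, fill_price: float, fee: float) -> dict:
    """Return the extra cost of a fill versus the quoted price."""
    slippage_pct = (fill_price - quoted_price) / quoted_price * 100
    fee_pct = fee / quoted_price * 100
    return {
        "total_paid": fill_price + fee,
        "slippage_pct": slippage_pct,
        "fee_pct": fee_pct,
        "total_overpay_pct": slippage_pct + fee_pct,
    }

cost = execution_cost(quoted_price=100_000, fill_price=100_300, fee=100)
print(cost)  # total paid $100,400: 0.3% slippage + 0.1% fee = 0.4% overpay
```
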

Paying $100,400 instead of $100,000 might sound normal in the crypto environment, but it is obscure to both more traditional investors and algorithms. If you train an algorithm to automate trading, you have to account for something called standard deviation. A low standard deviation indicates that values tend to be close to the expected value. A 0.4% standard deviation in execution cost is too high for a high-frequency strategy. Considering that the preferred direction of the algorithmically incentivized order has to be a specific direction (Long/Short) before entering a trade, and that your order will be filled at a worse price than expected, you are thus forced to "hold a high-frequency trade until profit", which is oxymoronic.
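To make the standard-deviation point concrete, here is a minimal sketch with assumed fill data (the deviation numbers are illustrative, not measurements):

```python
import statistics

# Assumed data: percentage deviations between quoted and realized
# prices across five trades. An HFT edge is often only a few basis
# points per trade, so an execution standard deviation near 0.4%
# swamps the edge entirely.
fill_deviation_pct = [0.30, 0.45, 0.38, 0.52, 0.35]

mean_dev = statistics.mean(fill_deviation_pct)   # average overpay per trade
sd_dev = statistics.stdev(fill_deviation_pct)    # spread around that average

print(f"mean deviation: {mean_dev:.2f}%, standard deviation: {sd_dev:.3f}%")
```

A low standard deviation would mean fills cluster tightly around the expected price; the wide spread here is exactly what breaks automated execution models.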

This is heavily simplified. There is a multitude of working HFT bots inside the crypto ecosystem, which are fundamentally different in structure and maturity from HFT algorithms in traditional finance markets like the Forex exchange. Not all HFT algorithms compete against each other: some hold positions for hours, some for minutes, some for seconds, and some for microseconds. Most HFT algorithms open trades based on different parameters, indicators, or aggregated order flows.

Probability doesn't equal simulated outcome.

If HFT firms traded purely on probabilities, they would all be profitable in a simulated environment. When ShezheaNET (back then: MicroAI) worked in the Rechart AI environment (a simulated trading environment for $NQ), we realized that basing executions solely on past TEs (Trading Environments, as seen in -1 Modeling) cannot be profitable. An HFT firm now has the choice to weight recent data over old data and thus create a prediction model for weekly/monthly market-maker behavior.
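Weighting recent data over old data can be sketched with a simple exponential scheme. The decay factor and input values below are illustrative assumptions, not ShezheaNET's actual model:

```python
# Minimal sketch of a recency-weighted average: newer observations
# receive exponentially larger weights than older ones.

def recency_weighted_mean(observations, decay=0.9):
    """Average observations, weighting recent ones more heavily.
    observations[-1] is the most recent data point; decay < 1
    shrinks the influence of older points."""
    weights = [decay ** age for age in range(len(observations) - 1, -1, -1)]
    total = sum(w * x for w, x in zip(weights, observations))
    return total / sum(weights)

# Assumed daily signal values, oldest to newest:
signal = [1.0, 1.0, 1.0, 2.0, 2.0]
print(recency_weighted_mean(signal))  # pulled above the plain mean of 1.4
```

A plain average of this series is 1.4; the recency-weighted version lands higher because the newest observations dominate, which is the behavior a market-maker prediction model would exploit.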

An example of this would be: CHF/USD is trading in direction A, Forex news is released, and four hours later Company 1 requests an order to sell CHF for USD from market-making firms. What has been the usual behavior of these firms, and with what probability can we predict it, so that our algorithms behave in a certain (profitable) way? This has been done in the past, and recent-data aggregation models have grown in popularity. View it as an alternative to our perspective on TEs.

We don't use recency-weighted aggregation models and are thus taking a step back from what is becoming the new norm. Why? Profitability in MLT HFT academia is usually measured in win rates rather than average PnL. Academia is becoming single-directional. Newer academia in the MLT HFT realm assumes that older academia was backtested not only in live environments but also in live executions. Usually, neither is the case.
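The gap between win rate and average PnL mentioned above is easy to demonstrate. The trade results below are assumed illustrative numbers, not real backtest data:

```python
# Sketch: a strategy can look profitable by win rate while losing
# money on average, because one large loss outweighs many small wins.

trades = [+10, +10, +10, +10, -50]  # four small winners, one large loser

win_rate = sum(1 for t in trades if t > 0) / len(trades)
avg_pnl = sum(trades) / len(trades)

print(f"win rate: {win_rate:.0%}, average PnL: {avg_pnl:+.1f}")
# 80% win rate, yet -2.0 average PnL per trade: profitable by the
# win-rate metric, unprofitable in practice.
```
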

