
PyTorch – Neural Model for Sequential Financial Feature Processing

Goal

This module demonstrates a clean, production-aligned inference pipeline for an AI-driven trading signal generator. It defines a neural model that processes sequential market features and outputs three metrics expected in a quantitative decision system: directional probabilities (Short / Neutral / Long), a confidence score to scale exposure, and a risk estimate to constrain position sizing. This positions the code as a foundation for a future learning-based trading engine while remaining lightweight and self-contained for demonstration and integration purposes.
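
To make the intended use of these three outputs concrete, the sketch below shows one way a downstream sizing layer might combine them into a signed exposure. The mapping rule and the signal_to_position helper are hypothetical illustrations, not part of the demo code.

# Hypothetical sketch: combining the three model outputs into a position size.
# The scaling rule (confidence damped by the risk estimate) is illustrative only.
import torch

def signal_to_position(direction_probs: torch.Tensor,
                       confidence: torch.Tensor,
                       risk: torch.Tensor,
                       max_exposure: float = 1.0) -> torch.Tensor:
    """Map model outputs to a signed position in [-max_exposure, max_exposure]."""
    # Direction: index 0 = Short, 1 = Neutral, 2 = Long -> sign in {-1, 0, +1}
    signs = torch.tensor([-1.0, 0.0, 1.0])
    direction = signs[direction_probs.argmax(dim=-1)]
    # Exposure grows with confidence and shrinks as the risk estimate rises.
    exposure = (confidence.squeeze(-1) / (1.0 + risk.squeeze(-1))).clamp(max=max_exposure)
    return direction * exposure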

Engineering Approach and Tools

The implementation uses PyTorch to define a GRU-based neural network that processes sequences of synthetic feature vectors. The model architecture includes an input encoder layer, a recurrent GRU block for temporal pattern extraction, and three dedicated output heads. Each head produces a specific signal component with appropriate activation functions: Softmax for direction probabilities, Sigmoid for confidence scaling, and Softplus for positive risk estimation. Data is generated using random tensors to simulate a batch of market sequences, and the model is executed in inference mode without parameter updates. The inference output is then packaged and the model is exported to ONNX, enabling compatibility with deployment environments and real-time execution engines.
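
As a hedged example of the deployment-side consumption mentioned above, the following sketch loads the exported graph with onnxruntime and runs a single inference. It assumes the onnxruntime package is installed and that export_onnx() has already written trading_ai.onnx with the fixed batch size of 1 used during export.

# Optional sanity check of the exported ONNX graph (assumes onnxruntime is installed
# and trading_ai.onnx exists in the working directory).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("trading_ai.onnx", providers=["CPUExecutionProvider"])
dummy = np.random.randn(1, 60, 256).astype(np.float32)  # (batch, seq_len, feature_dim)
direction_probs, confidence, risk = session.run(
    ["direction_probs", "confidence", "risk"],
    {"seq_features": dummy},
)
print(direction_probs.shape, confidence.shape, risk.shape)  # (1, 3) (1, 1) (1, 1)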

Execution Behavior and Output Interpretation

The execution produces structured outputs for a batch of five simulated sequences, summarized in the table below. The direction probabilities tensor shows a near-uniform distribution across Short, Neutral, and Long states, confirming consistent Softmax normalization. The confidence values remain close to 0.5, indicating neutral conviction due to the absence of trained weights. The risk estimate output returns positive scalar values around 0.59–0.61, as enforced by the Softplus activation. These outputs confirm that the architecture functions correctly, forward propagation is valid, and the model produces coherent tensors ready for downstream consumption in a trading pipeline or ONNX runtime environment.

Sequence   Short    Neutral   Long     Confidence   Risk
1          0.3249   0.3547    0.3205   0.5125       0.5938
2          0.3319   0.3272    0.3409   0.4999       0.5942
3          0.3205   0.3538    0.3257   0.4723       0.6026
4          0.3093   0.3343    0.3564   0.4921       0.6068
5          0.3548   0.3258    0.3195   0.4638       0.5896
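
The properties described above can also be verified programmatically. The short check below, which assumes the QuantTradingAI class from the code section that follows, asserts that each probability row sums to one, that confidence lies strictly between 0 and 1, and that the risk estimate is strictly positive.

# Quick checks on an untrained inference batch, mirroring the reported behavior.
import torch

model = QuantTradingAI()
model.eval()
with torch.no_grad():
    outputs = model(torch.randn(5, 60, 256))

# Softmax rows sum to 1, sigmoid confidence lies in (0, 1), softplus risk is positive.
assert torch.allclose(outputs["direction_probs"].sum(dim=-1), torch.ones(5), atol=1e-5)
assert ((outputs["confidence"] > 0) & (outputs["confidence"] < 1)).all()
assert (outputs["risk"] > 0).all()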

Code

# Author: Hamza Bendahmane
# trading_ai_demo_clean.py

import torch
import torch.nn as nn
import torch.nn.functional as F


# -----------------------------
# AI Model
# -----------------------------
class QuantTradingAI(nn.Module):
    """
    Deep learning model for quantitative trading.

    Features:
    - Sequence encoding via GRU
    - Multi-head outputs: direction, confidence, risk
    """

    def __init__(self, feature_dim=256, hidden_dim=128, seq_len=60, num_assets=4):
        super().__init__()
        self.seq_len = seq_len
        self.feature_dim = feature_dim
        self.hidden_dim = hidden_dim
        self.num_assets = num_assets

        # Input encoding
        self.input_encoder = nn.Linear(feature_dim, hidden_dim)

        # GRU sequence model
        self.gru = nn.GRU(
            input_size=hidden_dim,
            hidden_size=hidden_dim,
            num_layers=1,
            batch_first=True
        )

        # Multi-head outputs
        self.direction_head = nn.Linear(hidden_dim, 3)
        self.confidence_head = nn.Linear(hidden_dim, 1)
        self.risk_head = nn.Linear(hidden_dim, 1)

    def forward(self, seq_features):
        x = F.relu(self.input_encoder(seq_features))
        gru_out, _ = self.gru(x)
        last_hidden = gru_out[:, -1, :]

        direction_logits = self.direction_head(last_hidden)
        direction_probs = F.softmax(direction_logits, dim=-1)
        confidence = torch.sigmoid(self.confidence_head(last_hidden))
        risk = F.softplus(self.risk_head(last_hidden))

        return {
            "direction_probs": direction_probs,
            "confidence": confidence,
            "risk": risk
        }


# -----------------------------
# Loss
# -----------------------------
class TradingLoss(nn.Module):
    def __init__(self, risk_penalty=0.1):
        super().__init__()
        self.risk_penalty = risk_penalty

    def forward(self, predictions, targets, portfolio_returns):
        # The model outputs probabilities, so use log-probs with NLL loss
        # (cross_entropy would re-apply softmax to already-normalized values).
        direction_loss = F.nll_loss(
            torch.log(predictions['direction_probs'] + 1e-9), targets
        )
        # Penalize low risk-adjusted returns; reduce to a scalar for backprop.
        risk_adjusted_loss = -(
            portfolio_returns.mean() / (predictions['risk'] + 1e-6)
        ).mean()
        return direction_loss + self.risk_penalty * risk_adjusted_loss


# -----------------------------
# ONNX Export
# -----------------------------
def export_onnx(model, export_path="trading_ai.onnx", seq_len=60, feature_dim=256):
    """
    Export model to ONNX with fixed shapes for clean export.
    """
    model.eval()
    dummy_input = torch.randn(1, seq_len, feature_dim)

    torch.onnx.export(
        model,
        dummy_input,
        export_path,
        input_names=["seq_features"],
        output_names=["direction_probs", "confidence", "risk"],
        opset_version=18,
        dynamic_axes=None,
        verbose=False
    )
    print(f"[ONNX] Clean export → {export_path}")


# -----------------------------
# Demo
# -----------------------------
def demo_run():
    model = QuantTradingAI()
    model.eval()

    batch_size = 5
    seq_len = 60
    feature_dim = 256
    seq_features = torch.randn(batch_size, seq_len, feature_dim)

    # Inference only: no gradient tracking or parameter updates.
    with torch.no_grad():
        outputs = model(seq_features)

    print("[Demo] Trading AI outputs:")
    print("Direction probabilities:\n", outputs['direction_probs'])
    print("Confidence:\n", outputs['confidence'])
    print("Risk estimate:\n", outputs['risk'])

    export_onnx(model)


if __name__ == "__main__":
    demo_run()
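
The demo itself runs inference only; TradingLoss is defined but never exercised. The sketch below is a hypothetical training step showing where the loss could plug in, using randomly generated labels and portfolio returns purely as placeholders.

# Hypothetical training step (placeholder data, not part of the demo run).
import torch

model = QuantTradingAI()
criterion = TradingLoss(risk_penalty=0.1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
seq_features = torch.randn(32, 60, 256)        # placeholder market sequences
targets = torch.randint(0, 3, (32,))           # placeholder direction labels
portfolio_returns = torch.randn(32) * 0.01     # placeholder realized returns

optimizer.zero_grad()
predictions = model(seq_features)
loss = criterion(predictions, targets, portfolio_returns)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
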
© 2025 – Hamza Bendahmane. All rights reserved.