Hualin Luan Cloud Native · Quant Trading · AI Engineering


Quantitative trading system development record (1): five key decisions in project startup and architecture design

Taking Micang Trader as an example, this article starts from system boundaries, data flow, trading-session ownership, unified backtesting/live-trading interfaces, and AI collaboration boundaries to establish the architecture thread for the quantitative trading system series.

Meta

Published: 3/26/2026
Category: guide
Reading Time: 24 min read

Readers can treat this article as the system entry point for the whole series: first clarify the boundaries, data semantics, trading-session ownership, backtesting/live-trading interfaces, and AI collaboration boundaries of a quantitative trading desktop system, then move into the later articles on defects, testing, performance, refactoring, and AI engineering.

This is not a beginner tutorial on “how to write a strategy quickly.” The first questions are more basic: where market data enters, who owns trading-time semantics, where layered K-line generation belongs, whether the strategy layer can serve both backtesting and live trading, and how AI-generated code is constrained by architectural boundaries. If these questions are not answered first, later implementation work will easily become a pile of local patches.

Who should read it

The early risk in a quantitative trading system is often not “missing a tool.” It is handing too many problems to strategy code too early. Once market data governance, trading sessions, indicator calculation, signal decisions, backtest inputs, live feedback, and UI refreshes are mixed together, the system may look flexible in the short term, but it becomes hard to explain, reproduce, and maintain.

This article focuses on the engineering judgment behind those problems: whether data semantics should be centrally managed, whether strategies should only consume stable results, whether responsibilities are merged simply because implementation is convenient, and whether interface design can support both backtesting and live trading.

Therefore, this article does not start from a technology stack checklist. It starts from system boundaries. Later discussions about defect review, testing strategy, performance governance, architecture refactoring, and AI engineering all follow the same main line: stabilize the boundaries first, then let the implementation evolve.

Series reading order

It is recommended to read Part1 -> Part2 -> Part3 -> Part4 -> Part5 -> Part6 -> Part7. Part1 establishes the system boundary. Part2 and Part3 expose real Python engineering defect families. Part4 turns defects into testing defenses. Part5 handles performance governance. Part6 reviews architecture evolution. Part7 finally discusses how AI engineering strengthens the entire delivery loop.

Reading Roadmap of Seven Articles on the Development of Quantitative Trading System
Figure 1: Seven-part roadmap from architecture entry through risk references, testing, performance, architecture evolution, and AI engineering.

This reading order is not just a numerical sequence. It is a progression in cognitive load. You need system boundaries first to understand why small Python issues can be amplified inside a trading system. You need real defects first so that the testing article does not collapse into an abstract testing pyramid. You need test and performance evidence first so that boundary movement in the refactoring article has a basis. Finally, AI engineering makes sense only after you can distinguish which work can be delegated to AI and which decisions must remain with humans.

If you only want to quickly judge whether this series is relevant, read the system overview in this article and the architecture evolution diagram in Part6 first. If you are fixing a specific bug, you can first search Part2 and Part3 by risk family; after that, still return to Part1 to decide whether the bug belongs to data access, domain model, strategy layer, execution layer, backtesting layer, or presentation layer.

System entrance: define boundaries before discussing implementation

Micang Trader can be understood as a modular and extensible real-time quantitative trading system for layered signal assessment. It is customized on top of vn.py, inherits vn.py’s modular architecture advantages, can connect to multiple market data sources, can deeply customize data processing for specific markets when needed, and displays market data, indicators, strategy state, and runtime results through a desktop UI.

One common engineering risk at project startup is treating “getting the strategy to run first” as a substitute for system design. In the short term, strategy code can indeed create fast feedback. But if data access, period conversion, trading-session ownership, indicator calculation, chart rendering, and order execution do not have clear boundaries, they can gradually get squeezed into the same class or the same call chain. When backtest results and live behavior diverge, problem localization becomes difficult: it is no longer easy to tell whether the deviation comes from data, indicators, strategy, execution, or UI.

A more reliable starting point is to split the system into boundaries that can be reasoned about:

  • The market data entry receives external data only and does not decide trading-day semantics.
  • The trading-session model maps physical time to trading day, day session, night session, and forced-close points.
  • The layered aggregation service generates higher-period events from lower-period data and keeps aggregation logic unique.
  • The strategy core consumes stable events and indicator views, and does not query the GUI or database backward.
  • The execution layer handles orders, matching, gateways, and live-trading differences without polluting the strategy interface.
  • Both backtest and live adapters convert into the same event protocol.
  • Desktop monitoring only presents state, market data, and diagnostics; it does not own data semantics.
Micang Trader quantitative trading system panoramic architecture diagram
Figure 2: Single-process event-driven collaboration topology where modules attach to the vn.py Event Engine through publish / subscribe contracts.

This system overview answers: “which modules must not couple directly, and how must they collaborate through the event engine and event contracts?” In a single-process event-driven architecture, market data entry, data governance, K-line handling, strategies, risk/execution, backtesting, and desktop monitoring should not read each other’s internal state. They should publish and subscribe to standard events through the vn.py event engine. Otherwise, once the strategy layer directly reads the database, directly controls the UI, or directly decides natural-day splits, any business-rule change will penetrate multiple modules. For example, a night-session rule change affects K-line generation, which affects indicators, which affects signals, which eventually affects orders. When boundaries are unclear, any fix on that chain may introduce new inconsistencies. The value of the event engine is to constrain those changes inside stable event contracts.

Therefore, the first step in architecture design is not choosing more tools. It is clarifying system boundaries. Trading access, data governance, strategy decisions, execution feedback, backtesting validation, and interface presentation should each carry clear responsibilities. Tools are only implementation means. What really determines maintainability is whether business semantics are stable, whether data quality is trustworthy, and whether module boundaries are clear enough.

Decision 1: vn.py is not a black box; data semantics must move out

If you are building a trading system with vn.py, the biggest risk is usually not the framework itself. The risk is “putting everything into CtaTemplate.” This is fast at the beginning, but as the project grows, the strategy class takes on too many responsibilities at once: reading data, repairing data, deciding the trading day, calculating signals, and sending orders. The code can run, but it is hard to test and hard to debug.

A simple criterion helps: is the strategy class making trading decisions, or is it also acting as a data engineering layer? If the strategy is full of data queries, data repairs, and time-ownership decisions, every future rule change will ripple through the system.

A more stable boundary is: vn.py continues to provide events and trading interfaces, while data rules are centrally managed by an independent data service. The strategy reads stable inputs and makes buy/sell decisions. It does not directly handle low-level data details.

# illustrative code, not production code
# Not recommended: the strategy directly handles data details
class BadStrategy(CtaTemplate):
    def on_tick(self, tick: TickData):
        rows = self.db.load_recent_ticks(tick.vt_symbol, limit=500)
        clean_rows = fill_missing_values(rows)
        trading_day = resolve_trading_day(tick.datetime)
        signal = calc_signal(clean_rows, trading_day)
        if signal == "buy":
            self.buy(tick.ask_price_1, 1)

# Recommended: the data service only provides a consistent data view;
# the strategy computes the signal.
data_service = MarketDataService(symbol="HSI")

class GoodStrategy(CtaTemplate):
    def __init__(self, data_service: MarketDataService):
        super().__init__()
        self.data_service = data_service

    def on_bar(self, bar: BarData):
        view = self.data_service.get_view(bar.vt_symbol)
        if not view.ready:
            return

        signal = calc_signal(view.indicators, view.trading_day)
        if signal == "buy":
            self.buy(bar.close_price, 1)

strategy = GoodStrategy(data_service)

The important point is this: a consistent view is responsible for keeping inputs consistent at the same point in time; it does not output trading signals. Signals are always calculated by the strategy layer according to business rules. The cost is that you write one more layer of infrastructure early. The benefit is direct: strategy code is shorter and closer to business rules; data rules are maintained once rather than forked across strategies; tests can validate the data service first and then validate strategy decisions, making problem localization faster.

You can use three signals to tell whether the project is getting out of control: whether the strategy class directly writes SQL or reads files, whether it handles trading-day and time ownership by itself, and whether the same data-cleaning logic appears in multiple strategies. Once these signals appear, boundary separation is usually more effective than adding more strategy parameters.
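The three signals above assume a data service that can hand the strategy a single point-in-time snapshot. A minimal sketch of that contract, using hypothetical names (`MarketDataView`, `get_view`, `on_stable_bar`) rather than real vn.py APIs, could look like this:

```python
# illustrative sketch, not production code; names are hypothetical
from dataclasses import dataclass, field


@dataclass(frozen=True)
class MarketDataView:
    """Immutable snapshot: every field refers to the same point in time."""
    symbol: str
    trading_day: str
    ready: bool                      # False until enough history is loaded
    indicators: dict = field(default_factory=dict)


class MarketDataService:
    def __init__(self, symbol: str, min_history: int = 20):
        self.symbol = symbol
        self.min_history = min_history
        self._closes: list[float] = []
        self._trading_day = ""

    def on_stable_bar(self, close: float, trading_day: str) -> None:
        """Called once per signed-off bar; the only place state mutates."""
        self._closes.append(close)
        self._trading_day = trading_day

    def get_view(self, symbol: str) -> MarketDataView:
        """Return a frozen snapshot; the strategy never sees raw state."""
        ready = len(self._closes) >= self.min_history
        indicators = {}
        if ready:
            window = self._closes[-self.min_history:]
            indicators["sma"] = sum(window) / len(window)
        return MarketDataView(symbol, self._trading_day, ready, indicators)
```

Because the view is frozen and built in one place, a strategy cannot observe half-updated state, and the `ready` flag replaces scattered "do I have enough history?" checks inside strategies.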

Decision 2: data is the foundation; aggregation rules must meet commercial software standards

All upper-layer capabilities in a quantitative trading system ultimately depend on data quality. Strategies, indicators, backtesting, risk control, and chart display look like different modules, but they all depend on the same question: can the system process market data in a stable, explainable, and auditable way? If data entry, trading calendars, trading sessions, missing values, aggregation windows, and anomaly repair rules are not defined first, later strategy optimization can become precise computation on wrong data.

The most underestimated point here is the engineering standard for data aggregation rules. It should not be just a resample call, and it should not be scattered across strategies, charts, and backtest scripts. A commercial-software-grade implementation must answer several questions clearly: how trading calendars are maintained for different markets, how day sessions, night sessions, half days, holidays, and temporary closures are handled, how window boundaries close, how missing market data is marked, how abnormal ticks are filtered, whether filled data is allowed into the live chain, and how all rules are tested and audited.

If these rules are written into strategies by experience, the system may run in the short term but will eventually fork implicitly. One strategy generates 30-minute K-lines by itself, another chart component directly reads 1-hour K-lines, and the backtest script uses another Pandas path for historical data. All three results may “look reasonable,” but if trading sessions, window ownership, or missing-value rules differ slightly, indicators and signals quietly drift around boundary points.

# illustrative code, not production code
# Wrong: data aggregation rules are scattered across strategies, charts, and backtest scripts
strategy_30m = strategy.aggregate_30m(min_bars)
chart_1h = chart.load_hour_bars(symbol)
backtest_daily = history.resample("1D")

# Recommended: trading calendar, sessions, and aggregation rules
# are signed off by the data service
data_service = MarketDataService(
    calendar=ExchangeCalendar("HKFE"),
    session_model=TradingSessionModel("HKFE"),
    aggregation_policy=AggregationPolicy(
        window_boundary="left-closed-right-open",
        missing_value="mark_and_skip",
        outlier_filter="exchange_rule",
    ),
)
stable_bar = data_service.publish_stable_bar(raw_tick)

The more reliable approach is to treat data handling as system infrastructure, not as a helper function for strategies. After raw market data enters the system, it first goes through unified data cleaning, trading calendars, trading sessions, window ownership, and aggregation rules, then produces stable data events. The strategy layer should not care how those data points are filled, aggregated, or handled on special trading days. It should only consume stable results signed off by the data service.
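Of these rules, window ownership is the easiest to state precisely. Below is a sketch of the left-closed-right-open boundary rule named in the `AggregationPolicy` example above, assuming the window length divides an hour evenly; `window_start` is a hypothetical helper, not a vn.py function:

```python
# illustrative sketch, not production code
from datetime import datetime


def window_start(ts: datetime, minutes: int) -> datetime:
    """Left-closed-right-open ownership: ts belongs to [start, start + minutes).

    Assumes `minutes` divides 60 evenly (e.g. 5, 15, 30)."""
    floored = ts.minute - ts.minute % minutes
    return ts.replace(minute=floored, second=0, microsecond=0)
```

The point is that every timestamp maps to exactly one window start, so two aggregation paths that both call this function cannot disagree at a boundary; the implicit forking described above happens precisely when each path re-derives this rule on its own.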

Hong Kong futures trading session attribution model chart
Figure 3: Trading-session ownership diagram maps physical time to trading-day semantics and avoids contaminating indicators with natural-day splits.

This diagram answers the time-ownership problem that is easiest to get wrong in data rules. What a strategy really needs is not a bare timestamp, but signed-off business semantics: timestamp + trading_day + session_phase + force_close_at. timestamp records facts, trading_day owns business attribution, session_phase owns session semantics, and force_close_at owns the risk boundary. Once this semantic layer is fixed, later indicators and signals will not be quietly polluted by natural-day splits.
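As an illustration only, this semantic layer can be sketched as a single signed-off record. The session rules below are simplified placeholders (a 03:00 night-session close and fixed day-session times), not the real HKFE calendar, and `resolve_session` is a hypothetical function:

```python
# illustrative sketch, not production code; session times are placeholders
from dataclasses import dataclass
from datetime import datetime, date, time, timedelta


@dataclass(frozen=True)
class SessionStamp:
    timestamp: datetime       # physical fact
    trading_day: date         # business attribution
    session_phase: str        # "day" or "night"
    force_close_at: datetime  # risk boundary for this session


def resolve_session(ts: datetime) -> SessionStamp:
    """Map a physical timestamp to trading-day semantics.

    Simplified rule: ticks before the 03:00 night-session close belong
    to the previous day's night session and that day's trading day."""
    if ts.time() < time(3, 0):  # early-morning tail of the night session
        trading_day = (ts - timedelta(days=1)).date()
        phase = "night"
        force_close = datetime.combine(ts.date(), time(3, 0))
    elif ts.time() >= time(17, 15):  # night session begins
        trading_day = ts.date()
        phase = "night"
        force_close = datetime.combine(ts.date() + timedelta(days=1), time(3, 0))
    else:
        trading_day = ts.date()
        phase = "day"
        force_close = datetime.combine(ts.date(), time(16, 30))
    return SessionStamp(ts, trading_day, phase, force_close)
```

With this record in place, an early-Saturday tick carries Friday's `trading_day`, so no downstream indicator ever has to rediscover that fact from the bare timestamp.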

This investment can feel slow early because it requires more infrastructure code, more test cases, and more boundary rules. But it is the place where a quantitative trading system deserves the most investment. The more reliable the data layer is, the simpler the strategy layer becomes. The more centralized the aggregation rules are, the easier it is to keep backtesting and live trading consistent. The clearer the market rules are, the less likely the system is to lose control when it later supports more instruments, markets, and sessions.

Decision 3: fully use the event engine and keep business event-driven

One of vn.py’s core values is its event engine. A maintainable quantitative system should not let modules directly call each other, query each other’s state, or splice each other’s data. Business flow should revolve around events. Market data arrival, K-line generation, order submission, trade reports, position changes, risk triggers, and UI updates should all enter the system as explicit events rather than through implicit shared state.

The meaning of event-driven design is not just “sending messages.” It makes system boundaries clearer. The data service only publishes stable market events. The strategy engine only subscribes to the market events it needs. The risk module only cares about order and position events. The execution module only handles order requests and reports. The UI only subscribes to state updates for display. Each module remains highly cohesive and handles its own responsibility. Modules collaborate through event protocols instead of depending directly on each other’s internals.

If the system does not fully use the event engine, it easily degrades into a tightly coupled structure. Strategies query the database directly, charts call data aggregation functions directly, risk control reads strategy internal variables directly, and execution modules mutate strategy state backward. The code is shorter in the short term, but every later change touches multiple modules. Debugging also becomes hard: when a signal is abnormal, is it a data issue, a strategy issue, a risk issue, or a UI refresh issue?

# illustrative code, not production code
# Recommended: business modules collaborate through event protocols
# rather than directly calling each other's internal state.
event_engine.register(EventType.STABLE_BAR, strategy_engine.on_bar)
event_engine.register(EventType.SIGNAL, risk_engine.on_signal)
event_engine.register(EventType.ORDER_REQUEST, execution_engine.on_order_request)
event_engine.register(EventType.TRADE, portfolio_engine.on_trade)

event_engine.put(EventType.STABLE_BAR, stable_bar_event)

A more stable structure is: the base data service publishes market events, the aggregation service publishes stable K-line events, the strategy engine consumes market events and publishes signal events, the risk module consumes signal and account events and publishes executable instructions, and the trading adapter converts instructions into gateway calls before publishing order and trade reports back as events. Business can only move forward along the event flow; it cannot bypass the event engine to mutate other modules’ state.

Event collaboration path diagram
Figure 4: Event collaboration path where the data service, strategy, risk, and execution modules move forward through standard events.

This diagram answers how business moves forward under event-driven design. After market events enter the data service, they first form a consistent data view and then enter strategy evaluation. The strategy outputs signals, risk control outputs executable instructions, execution outputs order and trade reports, and monitoring only subscribes to state changes. The point is not which module calls which module first. The point is that all collaboration must pass through event contracts.

This principle makes the system easier to test and extend. Tests can construct event inputs directly and verify whether module output events are correct. Debugging can follow each step along the event chain. Adding new strategies, risk rules, or UI views only requires subscribing to stable events instead of invading existing modules. For a long-lived trading system, event-driven design is not a style choice. It is a basic constraint for loose coupling, high cohesion, and auditability.
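To make the contract concrete, here is a minimal synchronous sketch of the publish/subscribe idea. The real vn.py EventEngine dispatches handlers from a queue on a worker thread; this toy version only demonstrates that modules interact exclusively through `register` and `put`:

```python
# illustrative sketch, not the vn.py EventEngine
from collections import defaultdict
from typing import Any, Callable


class MiniEventEngine:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def register(self, event_type: str, handler: Callable[[Any], None]) -> None:
        self._handlers[event_type].append(handler)

    def put(self, event_type: str, data: Any) -> None:
        # Synchronous dispatch for illustration; vn.py uses a queued thread.
        for handler in self._handlers[event_type]:
            handler(data)


# Two modules that collaborate only through events, never by direct calls:
engine = MiniEventEngine()
signals: list[str] = []


def strategy_on_bar(bar: dict) -> None:
    if bar["close"] > bar["open"]:
        engine.put("SIGNAL", "buy")


engine.register("STABLE_BAR", strategy_on_bar)
engine.register("SIGNAL", signals.append)
engine.put("STABLE_BAR", {"open": 100.0, "close": 101.0})
```

Note that the strategy function never touches the `signals` list, and the consumer never touches the strategy; a test can drive either side by putting events directly.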

Decision 4: backtesting and live trading must share the strategy event protocol

One of the biggest maintenance burdens in quantitative systems is backtest/live logic bifurcation. Typical symptoms include: the backtest passes but live trading fails; live issues cannot be reproduced in backtests; the same strategy uses different indicator implementations in backtesting and live trading; backtest and live parameters gradually drift across two configuration entrances.

The root cause is usually not a single wrong function, but a different data-flow model. Backtesting is often a pull model: the strategy actively iterates historical data. Live trading is usually a push model: the exchange or gateway pushes ticks, bars, orders, and trades. If strategy code adapts to these two models separately, two state machines appear. As the system grows, it becomes hard to tell whether a strategy performance difference comes from the market environment or from different backtest/live code paths.

A more reliable abstraction is a unified event stream. The backtest engine converts historical data into an event stream, and the live engine converts real-time data into an event stream. The strategy layer only implements the same on_market_event(event) or on_bar(event) interface. Differences between backtesting and live trading stay in adapters instead of entering the strategy core.

# illustrative code, not production code
class UnifiedStrategy:
    def on_bar(self, event: BarEvent):
        # Whether the event comes from backtesting or live trading,
        # the strategy only sees the unified protocol.
        pass

backtest_engine = BacktestEngine(strategy)
backtest_engine.run(historical_events)

live_engine = LiveEngine(strategy)
live_engine.run(live_event_feed)
Unified strategy interface architecture diagram for backtesting and live trading
Figure 5: Unified backtest/live interface diagram. Differences stay in adapters, and the strategy layer only faces the unified event protocol.

This diagram answers how to avoid writing the same strategy twice. Historical event streams and live event streams both enter UnifiedStrategy, and strategy outputs are then handed to simulated matching or real order submission. The strategy layer does not need to know whether it is in backtesting or live trading. It only responds to standardized market events, order events, and state events.

The unified event protocol also affects indicator implementation. Early systems often use Pandas vectorized calculation in backtesting and incremental loops in live trading. This makes backtests faster to write, but it can create differences in floating-point precision, window closing, missing data, and initialization state. A safer principle is: backtesting can use batch input to drive an event stream, but strategy and indicator calculation should share the same core implementation as much as possible. Even a slower backtest is safer than long-term drift between two logic paths.

Decision 5: AI can accelerate implementation, but cannot replace architecture sign-off

AI coding tools are good at generating boilerplate code, drafting tests, explaining complex functions, extracting repeated logic, and doing local refactoring. For quantitative trading systems, those capabilities are useful because the system contains many data classes, configuration parsers, event converters, logs, boundary tests, and repetitive glue code.

But AI should not directly own architectural decisions. It does not know why a trading-session rule matters or what consequences an indicator deviation can have in live trading. It may propose a generic order-state management design while ignoring exchange-specific report differences. It may propose a concise Pandas implementation while ignoring real-time latency and memory pressure. It may split code beautifully while providing no evidence that backtesting and live trading are consistent.

A better human-AI collaboration model is: humans define architecture boundaries, interface contracts, and acceptance evidence first, then AI generates candidate implementations inside those boundaries. For example, do not ask “how to handle order reports.” Give clear constraints instead: “Based on vn.py OrderData and TradeData, implement OrderStateTracker; state transitions are only allowed as SUBMITTING -> NOTTRADED -> PARTTRADED -> ALLTRADED/CANCELLED/REJECTED; duplicate reports must be idempotent; the tracker must recover consistency through report replay after network jitter; events must include order_id, source_ts, and state_version; tests must cover disconnection/reconnection, duplicate reports, and out-of-order reports.”

The point of this prompting style is not to reduce AI’s freedom. It is to place freedom in implementation details while keeping system semantics in interfaces and tests. AI can generate a first draft of OrderStateTracker, add parameter validation and boundary tests, and refactor repeated code. It cannot replace humans in judging whether state transitions match trading rules, whether backtesting and live trading truly share the same protocol, or whether performance optimization changes the timing semantics of orders and reports.
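Under those constraints, a first draft of `OrderStateTracker` might look like the sketch below. The state set follows the constraint list above, while report replay and idempotency are reduced to a `state_version` comparison, which is a deliberate simplification of real exchange report handling:

```python
# illustrative first-draft sketch, not vn.py code
ALLOWED = {
    "SUBMITTING": {"NOTTRADED", "REJECTED", "CANCELLED"},
    "NOTTRADED": {"PARTTRADED", "ALLTRADED", "CANCELLED", "REJECTED"},
    "PARTTRADED": {"PARTTRADED", "ALLTRADED", "CANCELLED"},
    "ALLTRADED": set(),   # terminal
    "CANCELLED": set(),   # terminal
    "REJECTED": set(),    # terminal
}


class OrderStateTracker:
    def __init__(self, order_id: str):
        self.order_id = order_id
        self.state = "SUBMITTING"
        self.state_version = 0

    def on_report(self, state: str, state_version: int) -> bool:
        """Apply one report; return True if it advanced the order.

        Duplicate or stale reports (version <= current) are ignored,
        which makes replay after network jitter idempotent."""
        if state_version <= self.state_version:
            return False  # duplicate or out-of-order report
        if state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {state}")
        self.state = state
        self.state_version = state_version
        return True
```

Whether this version scheme survives a real gateway's report ordering is exactly the kind of judgment the paragraph above reserves for humans; the draft only makes the contract testable.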

K-line compression is another typical example. Faced with a goal of compressing historical K-line storage volume by 80%, AI will likely prefer the shortest Pandas DataFrame compression implementation. That may be enough for offline analysis, but a trading terminal must also consider read speed, random access, memory usage, cross-process reuse, and replay latency. Whether the final choice is a custom binary format, NumPy arrays, memory mapping, or another option must be decided from actual I/O patterns and benchmarks, not from code length.
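As one hedged illustration of that trade-off, a fixed-layout NumPy structured array gives compact storage plus random access and memory mapping; the field names and `float32` choice below are assumptions for the sketch, and only a benchmark against the real I/O pattern could justify them:

```python
# illustrative sketch, not production code
import os
import tempfile

import numpy as np

BAR_DTYPE = np.dtype([
    ("ts", "int64"),       # epoch seconds
    ("open", "float32"),   # float32 halves price storage vs float64
    ("high", "float32"),
    ("low", "float32"),
    ("close", "float32"),
    ("volume", "int32"),
])


def save_bars(path: str, bars: np.ndarray) -> None:
    """Write raw fixed-size records; no per-row parsing on read."""
    bars.astype(BAR_DTYPE).tofile(path)


def load_bar(path: str, index: int):
    """Random access via memory mapping, without reading the whole file."""
    mm = np.memmap(path, dtype=BAR_DTYPE, mode="r")
    return mm[index].copy()
```

The fixed record size is what makes `mm[index]` an O(1) seek; a Parquet or pickle layout would compress better but lose cheap random access, which is the kind of difference only the terminal's actual replay pattern can arbitrate.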

Therefore, the role of AI-assisted development in this series is clear: AI is responsible for candidate implementations, while humans own architecture boundaries, business semantics, risk judgment, and final sign-off. Part7 will continue with specification-driven development, agile delivery workflow, human-AI quality gates, and evidence loops. In Part1, remember one principle first: the faster AI can generate code, the more clearly non-negotiable boundaries must be written.

Test strategy: protect semantics first, then extend implementation

Testing a quantitative system cannot only ask whether the strategy makes money. Returns are affected by market environment, parameter selection, fees, slippage, sample interval, and execution quality. They cannot be equated directly with code correctness. More basic questions are: whether data conversion is correct, whether trading-day ownership is correct, whether higher-period aggregation is correct, whether indicator state is correct, and whether backtesting and live trading follow the same event protocol.

Testing can be understood in four layers. The first layer is unit tests for data conversion, trading-session ownership, indicator calculation, and utility functions. These inputs and outputs are deterministic and should be asserted clearly. The second layer is integration tests covering key paths from raw data to strategy signals, from signals to orders, and from network exceptions to degraded recovery. The third layer is backtest verification, using historical intervals to check whether signal triggers, window ownership, and indicator results match expectations. The fourth layer is paper-trading or sandbox operation, verifying that long-running systems do not lose data, disorder orders, lose logs, or hide exceptions.

Cross-trading-day data normalization is an example worth testing early. Suppose Friday night trading continues into early Saturday morning, and Monday morning reopens. The 30-minute line start time should not be polluted by natural-day splitting. The test does not merely verify that PeriodAggregator can produce a result. It verifies that the result is generated under trading-session model constraints.

# test case: cross-trading-day data normalization
# illustrative code, not production code
def test_cross_day_aggregation():
    friday_night = create_bars("2024-01-05 23:00", "2024-01-06 03:00")
    monday_morning = create_bars("2024-01-08 09:15", "2024-01-08 10:00")

    aggregator = PeriodAggregator(period="30m")
    results = []
    for bar in friday_night + monday_morning:
        result = aggregator.update(bar)
        if result:
            results.append(result)

    # The first completed 30-minute bar after the weekend must start at
    # the Monday session open, not at a natural-day boundary.
    monday_results = [r for r in results if r.start_time.startswith("2024-01-08")]
    assert monday_results[0].start_time == "2024-01-08 09:15"

The architectural meaning behind this test matters more than the code itself. It shows that the trading-session model must exist before aggregation logic, that aggregation logic must be independently callable, and that tests must bypass the GUI and real gateways. If a system cannot write this kind of test, its domain boundaries usually have not been separated yet.

Pre-launch checklist: five questions must have clear answers

Before starting the project, use the following five questions to check whether the architecture is clear enough.

1. Is the data layer independent?

  • Are data collection, storage, aggregation, and services separated into modules?
  • Does the strategy obtain events and indicators only through standard interfaces?
  • Can the data source be changed without modifying the strategy core?

2. How is layering managed?

  • Is period conversion logic centralized in one place?
  • Are different periods normalized by a unified session model?
  • Can a new period be added without copying aggregation logic into strategies?

3. Are backtesting and live trading unified?

  • Does strategy code share the same core interface in backtesting and live trading?
  • Does the backtest engine simulate key live differences such as latency, slippage, and matching?
  • Can live issues be reproduced in backtesting or testing environments through event replay?

4. Where is the boundary of AI assistance?

  • Which code can be delegated to AI for candidate implementations?
  • Which interfaces, trading semantics, and risk judgments must be manually signed off?
  • Must AI-generated code pass tests, review, and benchmark evidence?

5. What is the testing strategy?

  • Which deterministic domain logic is covered by unit tests?
  • Which key data flows and exception flows are covered by integration tests?
  • Which historical intervals and manual reference results are used for backtest verification?
  • How long must paper trading or sandbox operation run before entering the next phase?

The value of this checklist is to change project startup from “write it first and see” into “reduce irreversible risk first.” Quantitative trading systems rarely fail because one tool is missing. More often, they fail because data semantics, time semantics, strategy interfaces, and testing evidence were not fixed early.

Summary: architecture is for the human brain to manage complexity

The AI era makes it easy to form a dangerous illusion: since code generation is faster, early architecture can be more casual. Quantitative trading systems are the opposite. The easier code is to generate quickly, the more clearly the boundaries that cannot be crossed must be defined.

Clear architecture is not about making the directory structure look advanced. It is about letting people quickly locate the source of real problems: data access, trading sessions, layered aggregation, strategy event protocol, execution adaptation, or UI display. The faster the problem can be located, the less likely the fix is to introduce new inconsistencies.

The later articles follow this main line. Part2 and Part3 classify Python engineering risks by real bugfixes. Part4 turns defects into testing defenses. Part5 discusses performance investigation, benchmarks, and optimization rollback. Part6 reviews architecture evolution, technical debt, and ADR. Part7 discusses how AI engineering organizes specification, implementation, review, and acceptance into a closed loop.

If you remember only one startup principle, remember this: a trading system should not start from strategy code. It should start from boundaries. Stabilize data semantics, time semantics, event protocols, and verification evidence first, then let strategies evolve inside those stable boundaries.

Reference resources

  • vn.py official documentation: https://www.vnpy.com
  • AI development process used by the project: speckit (specification-driven development)
  • Agile delivery workflow used by the project: BMAD (Breakthrough Method for Agile AI Driven Development)

Series context

You are reading: Quantitative trading system development record

This is article 1 of 7.


Series Path

Current series chapters


7 chapters
  1. Part 1 (current). Quantitative trading system development record (1): five key decisions in project startup and architecture design. Taking Micang Trader as an example, establishes system boundaries, data flow, trading-session ownership, unified backtesting/live-trading interfaces, and AI collaboration boundaries as the architecture thread for the series.
  2. Part 2. Quantitative trading system development record (2): Python pitfalls practical avoidance guide (1). Reorganizes Python traps from a long list into an engineering risk reference: how three risk families (syntax and scope, type and state, concurrency and state) are amplified into real trading system problems.
  3. Part 3. Quantitative trading system development record (3): Python pitfalls practical avoidance guide (2). Continues the risk reference: how GUI lifecycles, asynchronous network failures, security boundaries, and deployment infrastructure affect the long-term stability of quantitative trading systems.
  4. Part 4. Quantitative trading system development record (4): test-driven agile development (AI Agent assistance). Starting from a cross-night trading-day boundary bug, rebuilds the testing defense line: a defect-oriented testing pyramid, AI TDD division of labor, boundary time, data lineage, and CI gates.
  5. Part 5. Quantitative trading system development record (5): Python performance tuning practice. Turns performance optimization from guesswork into a verifiable investigation: start from the 3-second chart delay, locate the real bottleneck, compare optimization options, and establish benchmarks and rollback strategies.
  6. Part 6. Quantitative trading system development record (6): architecture evolution and refactoring decisions. Reviews the five refactorings of Micang Trader, from the initial snapshot to a clearer target architecture, with technical debt and ADR decisions brought into long-term governance.
  7. Part 7. Quantitative trading system development record (7): AI engineering implementation, from speckit to BMAD. Uses the trading calendar and daily aggregation requirements as a single case to show how specification-driven development, BMAD role handover, and manual quality gates bring AI engineering into real delivery.
