<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Hualin Luan RSS Feed</title><description>A personal technical knowledge base covering backend engineering, distributed systems, Java, Python, and applied AI engineering.</description><link>https://milome.github.io/</link><language>en</language><item><title>Record of Quantitative Trading System Development (6): Architecture Evolution and Refactoring Decisions</title><link>https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part7/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part7/</guid><description>Review the five refactorings of Micang Trader, explaining how the system evolved from its initial snapshot to a clearer target architecture and how technical debt and ADR decisions were incorporated into long-term governance.</description><pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>architecture</category><category>refactoring</category><category>technical-debt</category><category>decision-making</category><category>quant-trading</category></item><item><title>Record of Quantitative Trading System Development (4): Test-Driven Agile Development with AI Agent Assistance</title><link>https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part6/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part6/</guid><description>Starting from an overnight trading-day boundary bug, we rebuild the quantitative trading system&apos;s testing defense line: a defect-oriented testing pyramid, an AI TDD division of labor, boundary times, data lineage, and CI gates.</description><pubDate>Mon, 30 Mar 2026 00:00:00 
GMT</pubDate><category>guide</category><category>tdd</category><category>testing</category><category>ai-development</category><category>pytest</category><category>quant-trading</category></item><item><title>Record of Quantitative Trading System Development (5): Python Performance Tuning in Practice</title><link>https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part5/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part5/</guid><description>Turn performance optimization from empirical guesswork into a verifiable investigation process: start from a 3-second chart delay, locate the real bottleneck, compare optimization approaches, and establish benchmarks and rollback strategies.</description><pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>python</category><category>performance</category><category>optimization</category><category>profiling</category><category>numba</category><category>multiprocessing</category><category>vectorization</category></item><item><title>Record of Quantitative Trading System Development (7): AI Engineering Implementation, from speckit to BMAD</title><link>https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part4/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part4/</guid><description>Taking the trading-calendar and daily-aggregation requirements as a single case study, this article explains how AI engineering can enter real quantitative system delivery through specification-driven development, BMAD role handoffs, and manual quality gates.</description><pubDate>Sat, 28 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>ai-engineering</category><category>speckit</category><category>bmad</category><category>agent-systems</category><category>development-workflow</category><category>prompt-engineering</category></item><item><title>Record 
of Quantitative Trading System Development (3): A Practical Guide to Avoiding Python Pitfalls (Part 2)</title><link>https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part3/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part3/</guid><description>Continuing to reorganize Python risks into a reference piece: how GUI lifecycles, asynchronous network failures, security boundaries, and deployment infrastructure affect the long-term stability of quantitative trading systems.</description><pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>python</category><category>pitfalls</category><category>qt</category><category>concurrency</category><category>security</category><category>quant-trading</category></item><item><title>Record of Quantitative Trading System Development (2): A Practical Guide to Avoiding Python Pitfalls (Part 1)</title><link>https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part2/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part2/</guid><description>Reorganize Python traps from a long checklist into an engineering risk reference for quantitative trading systems: how three categories of risk (syntax and scope, type and state, concurrency and state) get amplified into real trading-system problems.</description><pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>python</category><category>pitfalls</category><category>quant-trading</category><category>debugging</category><category>best-practices</category></item><item><title>Record of Quantitative Trading System Development (1): Five Key Decisions in Project Startup and Architecture Design</title><link>https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part1/</link><guid 
isPermaLink="true">https://milome.github.io/en/blog/series-quant-trading/quant-trading-dev-series-part1/</guid><description>Taking Micang Trader as an example, this article starts from system boundaries, data flow, trading-session ownership, unified backtesting/live-trading interfaces, and AI collaboration boundaries to establish the architectural through-line for the quantitative trading system series.</description><pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>quant-trading</category><category>vnpy</category><category>architecture</category><category>python</category><category>ai-development</category></item><item><title>From enterprise-level CF platform to cloud native (1): An architect&apos;s review - the gains and losses of microservice governance in the enterprise-level CF platform era</title><link>https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part1/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part1/</guid><description>Based on front-line architecture practice on enterprise-level CF platforms from 2015 to 2020 and industry observations from 2015 to 2026 (to date), we review the microservice governance design decisions of the Cloud Foundry era and analyze which have withstood the test of time and which have been reworked by the cloud-native wave.</description><pubDate>Sun, 01 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>microservices</category><category>cloud-foundry</category><category>architecture</category><category>governance</category><category>spring-cloud</category></item><item><title>From enterprise-level CF platform to cloud native (2): Observability-driven governance—from monitoring large screens to precise decision-making 
systems</title><link>https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part2/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part2/</guid><description>With 6 years of practical experience as an enterprise-level platform architect, we analyze the core position of observability in microservice governance, from data islands to OpenTelemetry unified standards, and build a governance system for accurate decision-making.</description><pubDate>Mon, 02 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>observability</category><category>opentelemetry</category><category>microservices</category><category>governance</category><category>monitoring</category></item><item><title>From enterprise-level CF platform to cloud native (3): The evolution of traffic management - from Spring Cloud Gateway to Gateway API and Ambient Mesh</title><link>https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part3/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part3/</guid><description>Review the practice of Spring Cloud Gateway in the enterprise-level CF platform, analyze the standardization value of Kubernetes Gateway API, explore the evolution logic from Service Mesh to Ambient Mesh, and provide a decision-making framework for enterprise traffic management selection.</description><pubDate>Tue, 03 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>microservices</category><category>traffic-management</category><category>spring-cloud-gateway</category><category>gateway-api</category><category>service-mesh</category><category>istio</category><category>ambient-mesh</category><category>cilium</category><category>kubernetes</category></item><item><title>From enterprise-level CF platform to cloud native (4): Redefining 
elastic fault tolerance—from Hystrix to adaptive governance</title><link>https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part4/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part4/</guid><description>Review Hystrix&apos;s historical position in microservice elastic governance, analyze Resilience4j&apos;s lightweight design philosophy, explore new paradigms of adaptive fault tolerance and chaos engineering, and provide practical guidance for enterprises to build resilient systems.</description><pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>microservices</category><category>resilience</category><category>circuit-breaker</category><category>hystrix</category><category>resilience4j</category><category>sentinel</category><category>chaos-engineering</category><category>fault-tolerance</category></item><item><title>From enterprise-level CF platform to cloud native (5): The evolution of release governance—from manual approval to progressive delivery</title><link>https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part5/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part5/</guid><description>Review the manual approval model of traditional release governance, analyze the evolution of blue-green deployment and canary release, explore the new paradigm of GitOps and progressive delivery, and provide practical guidance for enterprises to build an efficient and secure release system.</description><pubDate>Thu, 05 Mar 2026 00:00:00 
GMT</pubDate><category>guide</category><category>microservices</category><category>release-governance</category><category>blue-green</category><category>canary</category><category>feature-flags</category><category>gitops</category><category>progressive-delivery</category><category>argo-cd</category></item><item><title>From enterprise-level CF platform to cloud native (6): Summary—an architect’s perspective on enterprise-level microservice governance</title><link>https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part6/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-microservices-governance/microservices-governance-series-part6/</guid><description>Review the evolution of microservice governance over the past ten years from 2015 to 2026 (to date), refine the first principles of architects, summarize the implementation paths and common pitfalls of enterprise-level governance, look forward to future trends, and provide a systematic thinking framework for technical decision-makers.</description><pubDate>Fri, 06 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>microservices</category><category>governance</category><category>architecture</category><category>cloud-native</category><category>enterprise</category><category>platform-engineering</category><category>ebpf</category></item><item><title>Spring AI and LangChain4j: Enterprise Java AI Applications and AI Agent Architecture</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part6-spring-ai/</link><guid isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part6-spring-ai/</guid><description>A production-grade guide to Spring AI, LangChain4j, RAG, tool calling, memory, governance, observability, reliability, security, and enterprise AI operating boundaries.</description><pubDate>Mon, 06 Apr 2026 00:00:00 
GMT</pubDate><category>guide</category><category>java</category><category>spring-ai</category><category>langchain4j</category><category>ai-engineering</category></item><item><title>Original Analysis: Why FastAPI Rises in the AI Era—The Engineering Value of Type Hints and Async I/O</title><link>https://milome.github.io/en/blog/python-memory-model-part5-fastapi-rise/</link><guid isPermaLink="true">https://milome.github.io/en/blog/python-memory-model-part5-fastapi-rise/</guid><description>Analyzing Python type hints, async I/O, and FastAPI&apos;s rise logic; establishing a feature-capability matching framework for LLM API service development</description><pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>python</category><category>fastapi</category><category>async</category><category>type-hints</category><category>pydantic</category><category>web-framework</category></item><item><title>Original Analysis: Why Python Monopolizes LLM Development—Ecosystem Flywheel and Data Evidence</title><link>https://milome.github.io/en/blog/python-memory-model-part6-python-dominance/</link><guid isPermaLink="true">https://milome.github.io/en/blog/python-memory-model-part6-python-dominance/</guid><description>Synthesizing multi-source data from Stack Overflow 2025, PEP 703 industry testimonies, and LangChain ecosystem to analyze the causes and flywheel effects of Python&apos;s dominance in AI</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>python</category><category>ai-ml</category><category>ecosystem</category><category>data-analysis</category><category>llm</category></item><item><title>Original Analysis: Capability Building for Python Developers in the AI Tools Era—A Practical Guide for Frontline 
Engineers</title><link>https://milome.github.io/en/blog/python-memory-model-part7-career-guide/</link><guid isPermaLink="true">https://milome.github.io/en/blog/python-memory-model-part7-career-guide/</guid><description>Based on Stack Overflow 2025 data, establishing a capability building roadmap from beginner to expert, providing stage assessment, priority ranking, and minimum executable solutions</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>python</category><category>ai-tools</category><category>career</category><category>learning-path</category><category>practical-guide</category></item><item><title>Python Memory Model Deep Dive Series Overview (7 Parts)</title><link>https://milome.github.io/en/blog/2026-05-20-python-memory-model-series-index/</link><guid isPermaLink="true">https://milome.github.io/en/blog/2026-05-20-python-memory-model-series-index/</guid><description>This page serves as the navigation hub for the Python Memory Model Deep Dive Series, providing complete entry points in reading order to establish a comprehensive cognitive framework from underlying mechanisms to engineering practice to career development.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate><category>index</category><category>python</category><category>memory-model</category><category>series-index</category><category>reading-guide</category></item><item><title>Java Memory Model Deep Dive: From Happens-Before to Safe Publication</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part1-jmm/</link><guid isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part1-jmm/</guid><description>A production-grade deep dive into JMM, happens-before, volatile, final fields, optimistic locking, memory barriers, cache coherence, lock semantics, HotSpot implementation, and concurrency diagnostics.</description><pubDate>Wed, 
01 Apr 2026 00:00:00 GMT</pubDate><category>guide</category><category>java</category><category>jvm</category><category>memory-model</category><category>concurrency</category><category>volatile</category><category>synchronized</category></item><item><title>Modern Java Garbage Collection: Production Judgment, Evidence Collection, and Tuning Paths</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part2-gc/</link><guid isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part2-gc/</guid><description>Use symptoms, GC logs, JFR, container memory, and rollback discipline to choose and tune G1, ZGC, Shenandoah, Parallel GC, and Serial GC without cargo-cult flags.</description><pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate><category>guide</category><category>java</category><category>jvm</category><category>garbage-collection</category><category>performance</category></item><item><title>Concurrency Governance with Virtual Threads in Production Systems</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part3-loom/</link><guid isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part3-loom/</guid><description>Understand throughput, blocking, resource pools, downstream protection, pinning, structured concurrency, observability, and migration boundaries for Project Loom.</description><pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate><category>guide</category><category>java</category><category>loom</category><category>virtual-threads</category><category>concurrency</category></item><item><title>Valhalla and Panama: Java&apos;s Future Memory and Foreign-Interface Model</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part4-valhalla-panama/</link><guid isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part4-valhalla-panama/</guid><description>Separate delivered FFM API capabilities from 
evolving Valhalla value-type work, and reason about object layout, data locality, native interop, safety boundaries, and migration governance.</description><pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate><category>guide</category><category>java</category><category>valhalla</category><category>panama</category><category>ffm-api</category></item><item><title>Java Cloud-Native Production Guide: Runtime Images, Kubernetes, Native Image, Serverless, Supply Chain, and Rollback</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part5-cloud-native/</link><guid isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part5-cloud-native/</guid><description>A production-oriented Java cloud-native guide covering runtime selection, container resources, Kubernetes contracts, Native Image boundaries, Serverless, supply chain evidence, diagnostics, governance, and rollback.</description><pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate><category>guide</category><category>java</category><category>jpms</category><category>native-image</category><category>cloud-native</category></item><item><title>JIT and AOT: From Symptoms to Diagnosis to Optimization Decisions</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part7-jit-aot/</link><guid isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part7-jit-aot/</guid><description>A production decision guide for HotSpot, Graal, Native Image, PGO, and JVM diagnostics.</description><pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate><category>guide</category><category>java</category><category>jit</category><category>native-image</category><category>graalvm</category><category>performance</category></item><item><title>Java Ecosystem Outlook: JDK 25 LTS, JDK 26 GA, and JDK 27 EA</title><link>https://milome.github.io/en/blog/java-series/java-core-technologies-part8-ecosystem/</link><guid 
isPermaLink="true">https://milome.github.io/en/blog/java-series/java-core-technologies-part8-ecosystem/</guid><description>An enterprise architecture view of Java&apos;s next decade: version strategy, roadmap status, ecosystem boundaries, cloud-native operations, AI governance, and performance evolution.</description><pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate><category>guide</category><category>java</category><category>jdk</category><category>ecosystem</category><category>architecture</category></item><item><title>Original Interpretation: The Three-Layer World of Python Memory Architecture</title><link>https://milome.github.io/en/blog/python-memory-model-part1-memory-architecture/</link><guid isPermaLink="true">https://milome.github.io/en/blog/python-memory-model-part1-memory-architecture/</guid><description>Why doesn&apos;t memory drop after deleting large lists? Understanding the engineering trade-offs and design logic of Python&apos;s Arena-Pool-Block three-layer memory architecture</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>python</category><category>memory-management</category><category>cpython</category><category>performance</category></item><item><title>Original Interpretation: Python Garbage Collection - The Three Most Common Misconceptions</title><link>https://milome.github.io/en/blog/python-memory-model-part2-garbage-collection/</link><guid isPermaLink="true">https://milome.github.io/en/blog/python-memory-model-part2-garbage-collection/</guid><description>Deconstructing the three major misconceptions about reference counting, gc.collect(), and del statements, establishing a complete cognitive framework for Python GC mechanisms (reference counting + generational GC + cycle detection)</description><pubDate>Thu, 02 Apr 2026 00:00:00 
GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>python</category><category>garbage-collection</category><category>memory-management</category><category>performance</category></item><item><title>Original Analysis: 72 Processes vs 1 Process—How GIL Becomes a Bottleneck for AI Training and PEP 703&apos;s Breakthrough</title><link>https://milome.github.io/en/blog/python-memory-model-part3-pep703-gil/</link><guid isPermaLink="true">https://milome.github.io/en/blog/python-memory-model-part3-pep703-gil/</guid><description>Reviewing real production challenges at Meta AI and DeepMind, analyzing PEP 703&apos;s Biased Reference Counting (BRC) technology, and exploring the implications of Python 3.13+ nogil builds for large-scale model concurrency</description><pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>python</category><category>gil</category><category>pep703</category><category>concurrency</category><category>ai-ml</category></item><item><title>Original Analysis: Python as a Glue Language—How Bindings Connect Performance and Ease of Use</title><link>https://milome.github.io/en/blog/python-memory-model-part4-python-bindings/</link><guid isPermaLink="true">https://milome.github.io/en/blog/python-memory-model-part4-python-bindings/</guid><description>A comparative analysis of ctypes, CFFI, PyBind11, Cython, and PyO3/Rust, exploring the technical nature and engineering choices of Python as a glue language for large models</description><pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>python</category><category>bindings</category><category>ctypes</category><category>cython</category><category>pybind11</category><category>pyo3</category><category>rust</category><category>ffi</category></item><item><title>Why do you need to be a coding mentor for 
AI?</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part1-why-mentor/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part1-why-mentor/</guid><description>When AI programming assistants become standard equipment, the real competitive edge is no longer whether engineers can use AI, but whether they can judge, calibrate, and constrain AI&apos;s engineering output. Starting from trust gaps, feedback protocols, evaluation standards, and closed-loop capabilities, this article establishes the core framework of &quot;Humans as Coding Mentors&quot;.</description><pubDate>Mon, 30 Mar 2026 09:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>programming-evaluation</category><category>human-ai-collaboration</category><category>original-interpretation</category></item><item><title>A panorama of AI programming capability evaluation: from HumanEval to SWE-bench, the evolution and selection of benchmarks</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part2-benchmark-landscape/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part2-benchmark-landscape/</guid><description>Public benchmarks are not decoration for model leaderboards, but measurement tools for understanding the boundaries of AI programming capability. 
This article starts from benchmarks such as HumanEval, APPS, CodeContests, SWE-bench, LiveCodeBench, and Aider, explaining how to read leaderboards, how to choose benchmarks, and how to turn public evaluations into the team&apos;s own Coding Mentor evaluation system.</description><pubDate>Mon, 30 Mar 2026 10:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>programming-benchmark</category><category>original-interpretation</category><category>human-eval</category><category>swe-bench</category><category>livecodebench</category><category>evaluation-framework</category></item><item><title>How to design high-quality programming questions: from problem statement to evaluation contract</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part3-problem-design/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part3-problem-design/</guid><description>High-quality programming questions are not longer prompts, but assessment contracts that can reliably expose capability boundaries. 
This article starts from Bloom levels, difficulty calibration, task contracts, test design, and question-bank management to explain how to build a reproducible question system for an AI Coding Mentor.</description><pubDate>Mon, 30 Mar 2026 11:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>problem-design</category><category>original-interpretation</category><category>coding-exercises</category><category>bloom-taxonomy</category></item><item><title>A four-step approach to AI capability assessment: from a one-off test to continuous, systematic evaluation</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part4-four-step-evaluation/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part4-four-step-evaluation/</guid><description>Serving as a coding mentor for AI is not about running a one-off model evaluation, but about establishing an evaluation operating system that can continuously expose capability boundaries, record failure evidence, drive targeted improvements, and support collaborative decision-making.</description><pubDate>Mon, 30 Mar 2026 12:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>evaluation-methodology</category><category>original-interpretation</category><category>baseline-testing</category><category>continuous-assessment</category></item><item><title>Best Practices for Collaborating with AI: Task Agreement, Dialogue Control and Feedback Closed Loop</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part5-collaboration/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part5-collaboration/</guid><description>The core skill of being a Coding Mentor for AI is not to write longer prompts, but to design task protocols, control the rhythm of conversations, identify 
error patterns, and distill the collaboration process into verifiable and reusable feedback signals.</description><pubDate>Mon, 30 Mar 2026 13:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>human-ai-collaboration</category><category>original-interpretation</category><category>prompt-engineering</category><category>feedback-design</category></item><item><title>Practical cases: feedback protocol, evaluation closed loop, code review and programming education data</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part6-case-studies/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part6-case-studies/</guid><description>Case studies should not stop at “how to use AI tools better”. This article walks through four engineering scenarios: model selection evaluation, feedback protocol design, code review signal distillation, and a programming-education data closed loop. It explains how humans can transform the AI collaboration process into evaluable, trainable, and reusable mentor signals.</description><pubDate>Mon, 30 Mar 2026 14:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>case-study</category><category>original-interpretation</category><category>feedback-protocol</category><category>evaluation-framework</category><category>human-ai-collaboration</category></item><item><title>From delivery to training: How to turn AI programming collaboration into a Coding Mentor data closed loop</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part7-building-system/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part7-building-system/</guid><description>The real organizational value of AI programming assistants is not just to increase delivery speed, but to distill 
trainable, evaluable, and reusable mentor signals from every requirement breakdown, code generation, review and revision, test verification, and post-launch review. This article reconstructs the closed-loop framework of AI training, AI-assisted product engineering delivery, high-quality SFT data curation, and model evaluation.</description><pubDate>Mon, 30 Mar 2026 15:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>evaluation-system</category><category>original-interpretation</category><category>data-flywheel</category><category>ai-engineering</category><category>sft-training</category></item><item><title>From engineering practice to training data: a systematic method for automatically generating SFT data in AI engineering</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part8-sft-data-generation/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part8-sft-data-generation/</guid><description>Following the data closed loop in Part 7, this article focuses on how to turn the screened engineering assets into high-quality SFT samples and feed them into a manageable, evaluable, and iterable training pipeline.</description><pubDate>Mon, 30 Mar 2026 17:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>sft-training</category><category>original-interpretation</category><category>data-generation</category><category>bmad-method</category><category>spec-driven-development</category></item><item><title>Future Outlook: Evolutionary Trends and Long-term Thinking of AI Programming Assessment</title><link>https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part9-future-outlook/</link><guid isPermaLink="true">https://milome.github.io/en/blog/series-ai-coding-mentor/ai-coding-mentor-series-part9-future-outlook/</guid><description>As the final 
article in the series, it reframes the future path of AI Coding Mentor from the perspective of engineering decision-making: how evaluation targets evolve, how organizational capabilities are layered, and how governance boundaries advance.</description><pubDate>Mon, 30 Mar 2026 16:00:00 GMT</pubDate><category>interpretation</category><category>ai-coding-mentor</category><category>future-trends</category><category>original-interpretation</category><category>long-term-thinking</category><category>ai-evolution</category></item><item><title>The minimum upgrade path from blog to technology platform (1): from &apos;a pile of files&apos; to &apos;thematic collections&apos;</title><link>https://milome.github.io/en/blog/blog-to-platform-upgrade-path-part1/</link><guid isPermaLink="true">https://milome.github.io/en/blog/blog-to-platform-upgrade-path-part1/</guid><description>Once you have more than 20 blog posts, readers start to get lost in the timeline. This article shares practical experience: why organizing posts into themes is the first step of a blog upgrade, and how to judge whether you have reached the point where an upgrade is needed.</description><pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate><category>guide</category><category>blog-upgrade</category><category>content-strategy</category><category>information-architecture</category><category>astro</category><category>minimal-path</category></item><item><title>An agent runtime does not have to be local: Colab MCP points to a more realistic direction</title><link>https://milome.github.io/en/blog/colab-mcp-shows-agent-runtime-can-live-remote/</link><guid isPermaLink="true">https://milome.github.io/en/blog/colab-mcp-shows-agent-runtime-can-live-remote/</guid><description>The value of Colab MCP is not only that it runs Python in the cloud, but that it turns the agent&apos;s execution environment into a notebook workspace that is visible, editable, and resumable.
For many tasks, what really matters is not remote execution itself, but how the remote artifacts support human-AI collaboration. This article is based on Google&apos;s introduction to the Colab MCP Server and extends my own understanding of runtime surfaces, artifact-centered design, the remote workbench, and visibility as a trust mechanism.</description><pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>mcp</category><category>colab</category><category>runtime</category><category>notebooks</category><category>google</category></item><item><title>A truly mature eval harness does not just look at the answer</title><link>https://milome.github.io/en/blog/eval-harness-should-measure-process-not-just-output/</link><guid isPermaLink="true">https://milome.github.io/en/blog/eval-harness-should-measure-process-not-just-output/</guid><description>If an eval harness can only tell you whether a task succeeded or failed, but cannot explain whether the agent invoked the right capabilities, in what environment it ran, and why it failed or succeeded, then what it gives you is not a systematic judgment, just a scorecard.
This article is based on LangChain&apos;s discussion of skills evals and extends my own understanding of artifact-based scoring, invocation metrics, trace design, workflow evals, and how evaluations are organized.</description><pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>evals</category><category>agent-skills</category><category>langsmith</category><category>tracing</category><category>agents</category></item><item><title>The most misleading thing about agent benchmarks is not the model score, but the infrastructure noise</title><link>https://milome.github.io/en/blog/infra-noise-is-the-hidden-risk-in-agent-evals/</link><guid isPermaLink="true">https://milome.github.io/en/blog/infra-noise-is-the-hidden-risk-in-agent-evals/</guid><description>In agentic coding evals, the model is not the only variable. Resource headroom, kill semantics, concurrency pressure, network conditions, and sandbox behavior can all change task results. If these conditions are not disclosed, small margins on the leaderboard tell you less than they seem to.
This article is based on Anthropic&apos;s analysis of infrastructure noise and extends my own understanding of agent benchmark interpretability, disclosure discipline, repeated experiments, and system-level evaluation.</description><pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>evals</category><category>infrastructure</category><category>benchmark</category><category>agents</category><category>anthropic</category></item><item><title>What long-running agents really lack is not intelligence, but handover, recovery, and acceptance capabilities</title><link>https://milome.github.io/en/blog/long-running-agents-need-handoffs-not-just-intelligence/</link><guid isPermaLink="true">https://milome.github.io/en/blog/long-running-agents-need-handoffs-not-just-intelligence/</guid><description>Long-running agents often fail not because the model cannot think, but because the system never designed &apos;handover, recovery, verification, and continuation&apos; as first-class citizens. This article is based on Anthropic&apos;s discussion of long-running agent harnesses, extending my own views on cross-session execution, state externalization, feature contracts, smoke tests, browser verification, and multi-round execution structure.
It also explains why a truly usable agent is not one that runs for a long stretch in a single session, but one that can pick the work back up, round after round.</description><pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>agents</category><category>long-running-agents</category><category>harness</category><category>anthropic</category><category>verification</category></item><item><title>What MCP changes is not tool access, but the cost structure of agents</title><link>https://milome.github.io/en/blog/mcp-changes-context-economics-for-agents/</link><guid isPermaLink="true">https://milome.github.io/en/blog/mcp-changes-context-economics-for-agents/</guid><description>The real significance of MCP is not just that it unifies tool access, but that it moves a large amount of intermediate work that belongs in the runtime out of the expensive LLM loop. What it changes is not &apos;how many tools can be connected&apos;, but how the agent uses context, code execution, and runtime control flow. This article is based on Anthropic&apos;s discussion of code execution with MCP and extends my own understanding of direct tool-calling, progressive disclosure, runtime economics, and executable skills.</description><pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>mcp</category><category>code-execution</category><category>context-engineering</category><category>agents</category><category>anthropic</category></item><item><title>The agent harness is not a supporting role, but the most underrated battleground of AI engineering in 2026</title><link>https://milome.github.io/en/blog/why-agent-harness-matters-2026/</link><guid isPermaLink="true">https://milome.github.io/en/blog/why-agent-harness-matters-2026/</guid><description>What really determines an agent&apos;s ceiling is often not the model itself, but the harness built around it.
This article is based on LangChain&apos;s breakdown of the agent harness, extending my own understanding of file systems, code execution, context management, verification closed loops, and endurance on long-running tasks. It also explains why the focus of AI engineering competition in 2026 is shifting from &apos;model capabilities&apos; to &apos;working system design&apos;.</description><pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>agents</category><category>harness</category><category>context-engineering</category><category>ai-engineering</category><category>langchain</category></item><item><title>Overview of the OpenClaw in-depth interpretation series (10 articles)</title><link>https://milome.github.io/en/blog/2026-03-24-openclaw-deep-series-index/</link><guid isPermaLink="true">https://milome.github.io/en/blog/2026-03-24-openclaw-deep-series-index/</guid><description>This is the navigation page for the OpenClaw in-depth interpretation series, linking to all articles in reading order.</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate><category>compatibility</category><category>openclaw</category><category>series-index</category><category>reading-guide</category></item><item><title>Original interpretation: Why do OpenClaw security incidents always happen after &apos;the risk is already known&apos;?</title><link>https://milome.github.io/en/blog/2026-03-24-openclaw-deep-01-security-nightmare-incident/</link><guid isPermaLink="true">https://milome.github.io/en/blog/2026-03-24-openclaw-deep-01-security-nightmare-incident/</guid><description>Why do OpenClaw security incidents always happen after &apos;the risk is already known&apos;?
This article does not blame the model for going out of control; instead it interrogates a design flaw in execution rights: when a system puts execution, audit, and rollback rights on the same path, how does organizational blindness amplify controllable deviations into accidents, step by step?</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>openclaw</category><category>agent-security</category><category>incident-review</category></item><item><title>Original interpretation: Why is a lightweight agent solution likely to be closer to production reality than the &apos;big and comprehensive&apos; one?</title><link>https://milome.github.io/en/blog/2026-03-24-openclaw-deep-02-nanobot-contrarian/</link><guid isPermaLink="true">https://milome.github.io/en/blog/2026-03-24-openclaw-deep-02-nanobot-contrarian/</guid><description>This is not a feel-good piece praising &apos;lightweight&apos;, but an argument against engineering illusion: many OpenClaw agent stacks that look stronger merely front-load complexity into demo capabilities, and defer the cost into production failures and early-morning on-call duty.</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>openclaw</category><category>nanobot</category><category>contrarian</category></item><item><title>Original interpretation: Treat Notion as the control plane for 18 agents.
The first thing to solve is never &apos;automation&apos;</title><link>https://milome.github.io/en/blog/2026-03-24-openclaw-deep-03-notion-control-plane-operator/</link><guid isPermaLink="true">https://milome.github.io/en/blog/2026-03-24-openclaw-deep-03-notion-control-plane-operator/</guid><description>This article is not about whether the console interface looks good; it raises a more fundamental production question: when you connect 18 OpenClaw agents to a Notion control plane, is the system amplifying team productivity, or amplifying scheduling noise and status chaos?</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>openclaw</category><category>multi-agent</category><category>operator-playbook</category></item><item><title>Original interpretation: Putting an agent on an ESP32, the easiest pitfall is not performance, but the boundary illusion</title><link>https://milome.github.io/en/blog/2026-03-24-openclaw-deep-04-esp32-myth-busting/</link><guid isPermaLink="true">https://milome.github.io/en/blog/2026-03-24-openclaw-deep-04-esp32-myth-busting/</guid><description>This article does not present the ESP32 edge agent as a cool technology experiment; it takes apart the four most common misunderstandings, among them: getting the board to run does not mean the system is usable, going offline is not just a network problem, and local success does not mean on-site maintainability. Edge deployments require new engineering assumptions.</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate><category>interpretation</category><category>original-interpretation</category><category>openclaw</category><category>esp32</category><category>edge-agent</category></item></channel></rss>