<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Ai on Viktor Gamov</title><link>https://gamov.io/tags/ai/</link><description>Recent content in Ai on Viktor Gamov</description><generator>Hugo</generator><language>en-us</language><copyright>2011–2026 Viktor Gamov</copyright><lastBuildDate>Mon, 30 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://gamov.io/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>When AI Alignment Experts Can't Align Their Own AI</title><link>https://gamov.io/posts/openclaw-vs-nanoclaw/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate><guid>https://gamov.io/posts/openclaw-vs-nanoclaw/</guid><description>&lt;h2 id="tldr" class="discrete"&gt;TLDR&lt;/h2&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Meta’s director of AI alignment told her agent &amp;#34;confirm before acting.&amp;#34; It deleted her entire inbox anyway.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OpenClaw hit 250,000 GitHub stars in 60 days, its codebase swelled to 400,000 lines, and about 12% of its skill marketplace was malware.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;NanoClaw runs every agent in its own Linux container with 4,000 lines of code you can read in one sitting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I picked a third option. It runs on the JVM.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;</description></item><item><title>Twelve Million Java Developers, and the AI Ecosystem Forgot About Them</title><link>https://gamov.io/posts/langchain4j/</link><pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate><guid>https://gamov.io/posts/langchain4j/</guid><description>&lt;h2 id="tldr" class="discrete"&gt;TLDR&lt;/h2&gt;
&lt;div class="ulist"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;There are twelve million Java developers, and the AI ecosystem built everything for Python first. LangChain4j fixes that.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can wire up an LLM, give it tools (database queries, API calls, search), and let it decide which tool to use and when. That’s an agent, built in Java.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It provides prompt templates, chat memory, RAG pipelines, MCP support, and tool calling across 20+ LLM providers through a single API.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;</description></item></channel></rss>