# Koboto.ai

<mark style="color:red;">(Rotating typewriter-style text cycling through: 1. compute, 2. verifiability, 3. access of AI services)</mark>

**Agent marketplace for multi-interactive, intent-based & inference aggregation**

DEPLOY AGENT   ||   STITCH AGENT&#x20;

<mark style="color:red;">(these are two separate buttons displayed horizontally, side by side)</mark>

{% hint style="info" %} <mark style="color:red;">Disclaimer for the dev team – text marked in red ink describes functions and interactive elements to add to the website; please take a look. Design inspiration can be taken from: kusama (for font bases),</mark> [<mark style="color:red;">https://www.together.ai/</mark>](https://www.together.ai/) <mark style="color:red;">(design & color), <https://initia.xyz/>,</mark> [<mark style="color:red;">https://www.paradromics.com/</mark>](https://www.paradromics.com/) <mark style="color:red;">and</mark> [<mark style="color:red;">https://www.modulus.xyz/</mark>](https://www.modulus.xyz/)<mark style="color:red;">. Other functions can be discussed during the meeting.</mark>

<mark style="color:red;">Red text will not be added to the website; it is only for the developers' knowledge graph.</mark>

<mark style="color:red;">Wherever the ℹ️ icon appears, a hover function has to be used. The hover function represents metadata. Check the comment box to read the upgrades, which will be marked as (#DEVELOPERS LOOKOUT) on the side, with +1, 2, or 3.</mark>

<mark style="color:red;">Once the Figma designs are ready, they have to be inserted into GitBook on their specified pages.</mark>
{% endhint %}

*Modularizing inference with the Koboto network's multi-node infrastructure: proof nodes, model-caching nodes, and privacy nodes.*

Leveraging ZK, optimistic, and probabilistic proofs, and for privacy further integrating MPC along with technologies such as the [Linear Secret Sharing Scheme (LSSS)](https://docs.nillion.com/glossary#linear-secret-sharing-scheme-lsss).
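
As an illustration of how an LSSS works, here is a minimal Shamir-style secret-sharing sketch in plain Python. Shamir's scheme is the classic linear secret-sharing scheme; this is purely illustrative and is not Koboto's or Nillion's implementation.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is done mod PRIME

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares; any `threshold` of them reconstruct it."""
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret as a *linear*
    combination of the shares, which is what makes the scheme linear."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(12345, n_shares=5, threshold=3)
print(reconstruct(shares[:3]))  # → 12345 (any 3 of the 5 shares suffice)
```

Fewer than `threshold` shares reveal nothing about the secret, which is what lets privacy nodes compute over data no single node ever sees in full.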

***

ECOSYSTEM PARTNERS  <mark style="color:red;">(logo carousel that scrolls from one side to the other)</mark>

***

**Solving the trilemma of on-chain inference**

The Koboto network imagines a future where the model & agent economy revolves around solving for desired inference and verifiability while taking compute & community into account.

<figure><img src="https://content.gitbook.com/content/txq6TRG8eFOkB0cWx9RP/blobs/3f5JSv88HuIVlPKDjfBg/trillemas%20of%20inference.png" alt=""><figcaption><p><mark style="color:red;">The UI dev can recreate this diagram for a better look on the website</mark></p></figcaption></figure>

Here's how we imagine the future of contributors in the Koboto universe:

<details>

<summary>Open-Source</summary>

Leveraging the foundation of an open-source economy, the intersection of cryptocurrency and AI, together with positive-sum games, benefits all network participants.

</details>

<details>

<summary>Compute</summary>

We incorporate compute by introducing heterogeneous edge computing alongside any cloud computing the node runners want to use for inference tasks.

Our AI agents conduct a symphony of diverse processors (CPUs, GPUs, TPUs, and others) collaborating seamlessly through off-chain nodes running dockerized containers. We solve the compute-bandwidth issue by leveraging processor diversity, resource allocation, and optimal partitioning between power, speed, and memory, transforming data into actionable insights exactly where they are most needed.

</details>
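
The resource-allocation idea above can be sketched as a simple proportional scheduler. This is an assumed behaviour for illustration, not the network's actual scheduler; the device names and throughput figures are made up.

```python
def partition_workload(total_items, devices):
    """Split `total_items` inference requests across heterogeneous devices
    in proportion to their measured throughput (items/sec).
    `devices` is a dict like {"cpu-0": 20, "gpu-0": 120, "tpu-0": 60}."""
    total_throughput = sum(devices.values())
    allocation = {}
    assigned = 0
    for name, rate in devices.items():
        share = total_items * rate // total_throughput
        allocation[name] = share
        assigned += share
    # Hand any rounding remainder to the fastest device.
    fastest = max(devices, key=devices.get)
    allocation[fastest] += total_items - assigned
    return allocation

print(partition_workload(1000, {"cpu-0": 20, "gpu-0": 120, "tpu-0": 60}))
# → {'cpu-0': 100, 'gpu-0': 600, 'tpu-0': 300}
```

A production scheduler would also weigh memory limits, latency, and cost, but the proportional split captures the core idea of optimal partitioning across a heterogeneous fleet.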

<details>

<summary>Inference verifiability</summary>

Our modular approach combines a multi-node architecture for inference verifiability, constructing proofs according to user choice by leveraging ZK, optimistic, or probabilistic proof enzymes.

</details>

<details>

<summary>Inference engine &#x26; toolkit</summary>

We leverage the ONNX (Open Neural Network Exchange) format to perform inference with models created in different frameworks, bridging the gap between diverse ML libraries. The ONNX runtime engine serves as a versatile machine-learning model accelerator, supporting various use cases for inference.

We also leverage TGI (Text Generation Inference), a toolkit developed by Hugging Face for deploying and serving Large Language Models (LLMs) efficiently. TGI acts as an intermediary layer between your application and the actual LLM.

In addition, we provide support for closed-source and custom-built inference engines & toolkits, offering versatility in model acceleration around blockchain-specific datasets.


</details>
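
For reference, TGI is typically served from Hugging Face's official container image and queried over HTTP. This is a generic deployment sketch, not Koboto's configuration; the model id, image tag, port, and generation parameters below are placeholder examples and flags may differ between TGI versions.

```shell
# Launch a TGI server (GPU host assumed); model id and port are examples.
docker run --gpus all --shm-size 1g -p 8080:80 \
  -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id mistralai/Mistral-7B-Instruct-v0.2

# Query it through the /generate endpoint:
curl 127.0.0.1:8080/generate \
  -X POST -H 'Content-Type: application/json' \
  -d '{"inputs": "What is ONNX?", "parameters": {"max_new_tokens": 64}}'
```

Because the application only talks to this HTTP layer, the node runner can swap the underlying model or hardware without changing the calling agent.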

<details>

<summary>Modular &#x26; Dynamic messaging</summary>

We use the Noise protocol framework for dynamic connections across multi-agent groups in order to achieve the desired inference. Agents initially form groups using a handshake protocol: through the various handshake patterns, encryption options, and key-exchange methods, they exchange cryptographic keys, establish trust, and define their roles within the group. Once grouped, agents collaborate to achieve a specific inference task. The Noise protocol provides security, privacy, and flexibility: when the current goal is achieved or needs change, agents can break their existing connections, then reconfigure by forming new groups with different agents or reusing existing ones.

</details>
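
The handshake / disband / regroup lifecycle can be sketched with a toy key exchange. A real deployment would use an actual Noise handshake pattern (e.g. Noise_XX over Curve25519); this stand-in uses a textbook Diffie-Hellman purely to show the lifecycle.

```python
import hashlib
import secrets

# Toy Diffie-Hellman standing in for a real Noise handshake pattern.
P = 2**127 - 1   # a Mersenne prime used as the group modulus
G = 3            # toy generator

class Agent:
    def __init__(self, name):
        self.name = name
        self.priv = secrets.randbelow(P - 2) + 2   # ephemeral private key
        self.pub = pow(G, self.priv, P)            # public key g^priv mod p
        self.session_keys = {}                     # peer name -> session key

def handshake(a, b):
    """Both sides derive the same shared secret g^(ab) mod p and hash it
    into a symmetric session key, establishing the group link."""
    shared = pow(b.pub, a.priv, P)
    assert shared == pow(a.pub, b.priv, P)         # both sides agree
    key = hashlib.sha256(shared.to_bytes(16, "big")).hexdigest()
    a.session_keys[b.name] = key
    b.session_keys[a.name] = key

def disband(a, b):
    """When the goal is achieved or needs change, agents drop the session
    and are free to regroup via a fresh handshake."""
    a.session_keys.pop(b.name, None)
    b.session_keys.pop(a.name, None)

alice, bob = Agent("alice"), Agent("bob")
handshake(alice, bob)
print(alice.session_keys["bob"] == bob.session_keys["alice"])  # → True
```

Because keys are ephemeral and per-group, tearing a group down and re-handshaking with new peers gives each configuration fresh session keys.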

***

Agents in the Koboto network are built on these foundational agent types:

1. **Multi-interactive agents**

Multi-interactive agents work together, sharing tasks and responsibilities to achieve a common goal. Each agent is independent, acting on its own observations and goals. In the Koboto network, whenever a user requests an inference, agents interact to achieve the desired goal, acting cooperatively, competitively, or neutrally, and they call each other through the Noise protocol.

2. **Intent-based agents**

We incorporate AI-powered solvers that can understand and efficiently execute complex user intents, even when dealing with nuanced requests. Instead of merely executing transactions based on explicit commands, "intents" allow users to delegate transaction construction and execution to Koboto-powered solvers. AI models equipped with NLP on KOBOTO.AI can interpret these intents with a level of nuance far beyond basic instructions.

3. **Inference aggregator agents**

The Inference Aggregator Agent (IAA) acts as a bridge, dynamically connecting to other AI networks to fulfil the user's desired inference. The IAA is a specialized AI entity responsible for gathering, combining, and refining inferences, acting as an intermediary between users and various AI networks. It works dynamically over a local knowledge base and other source networks that it is already aware of or that are registered on the Koboto network, finding the inference the user desires. The Koboto IAA combines inferences using techniques like ensemble methods, weighted averaging, and consensus algorithms; the aggregated inference is then presented to the user.
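
The aggregation techniques named above (consensus voting and weighted averaging) can be sketched in a few lines. The inputs and trust weights below are made up for illustration; a real IAA would derive weights from each source network's track record.

```python
from collections import Counter

def aggregate_majority(inferences):
    """Consensus by majority vote over categorical answers returned by
    several source networks (ties broken by first-seen order)."""
    return Counter(inferences).most_common(1)[0][0]

def aggregate_weighted(inferences, weights):
    """Weighted average for numeric inferences, weighting each source
    network by a trust/accuracy score."""
    total = sum(weights)
    return sum(v * w for v, w in zip(inferences, weights)) / total

print(aggregate_majority(["cat", "cat", "dog"]))        # → cat
print(aggregate_weighted([0.9, 0.7, 0.8], [3, 1, 2]))   # (0.9*3 + 0.7 + 0.8*2) / 6
```

Ensemble methods generalize both ideas: many imperfect sources, combined with weights that reflect trust, usually beat any single source.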

**Multi-interactive behaviour, a focus on user intent, and finding the best desired inference through aggregation act as the foundational pillars of the Koboto network.**

***

AGENTS

<mark style="color:red;">Agent page content will be transferred here (take inspiration from EigenLayer)</mark>

***

FOOTER <mark style="color:red;">(with social media and links to other pages)</mark>

