Show HN: Cognita – open-source RAG framework for modular applications (github.com)
Jianghong94 12 days ago [-]
Congrats on the launch!

I found it relevant to what I want to do next, so I put in some time to understand the application versus other options, e.g. LangChain. If my understanding is correct, what this tries to do is:

For a lot of typical web services, there are non-real-time, batch-processing data processors, e.g. a search engine's crawler and indexer, or a database's OLAP system, Hadoop, Spark, etc. Once their processing is done, they output data in a relevant, easy-to-use form for real-time web services to consume, e.g. a search engine's index, or a list of an e-commerce site's best-selling items.

If we extend that analogy to today's LLM RAG applications and compare it with an out-of-the-box LangChain or LlamaIndex implementation, we'll realize everything runs in one process. Of course, for demo purposes, they have to.

Cognita tries to fit in by splitting the process into real-time and non-real-time parts on top of existing LangChain and LlamaIndex, and comes with an API endpoint for each part plus a web UI for user querying.

For my use case, I'm looking into setting up a very basic RAG-based internal doc QA app, to see if it helps with some of our notoriously bad wikis. So I'm likely going to use this UI and just shovel whatever simple LangChain or LlamaIndex implementation into it; I'm not that interested in the modular design. Honestly, I can see a couple of different ways each market segment approaches such a problem: for demo / mainly-static-document / low-stakes applications, the need to periodically refresh the vector DB is non-existent; companies with enough engineering expertise will likely put the data processing part into an existing data processing framework; and the remaining segment can probably get away with putting the whole offline data processing step into a very long Python script, setting up cron, and calling it a day.
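As a rough sketch of that "long Python script plus cron" path, assuming LlamaIndex's quickstart-style API (import paths differ across versions) and an exported copy of the wiki in a local folder; the folder names here are placeholders:

```python
# offline_index.py -- run on a schedule (e.g. via cron) to rebuild the index.
# Assumes LlamaIndex's quickstart-style API; import paths vary by version.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

def rebuild_index(docs_dir: str = "wiki_export", persist_dir: str = "index_store"):
    documents = SimpleDirectoryReader(docs_dir).load_data()  # read exported wiki pages
    index = VectorStoreIndex.from_documents(documents)       # chunk, embed, and index them
    index.storage_context.persist(persist_dir=persist_dir)   # persist for the online QA app

if __name__ == "__main__":
    rebuild_index()
```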

---

I haven't looked into RAG in a year or so, but my overall impression is this: 1. The RAG layer (on top of the vector DB) isn't technically difficult compared with, say, OS development or database development; after all, text manipulation has been around since the 60s. 2. Since LLM generation is very sensitive to the prompt, an early, overly rigid abstraction likely does more harm than good.

sdesol 12 days ago [-]
You might want to look at https://github.com/danswer-ai/danswer as well, as it sounds like their UI might be better suited for your use case.
s1lv3rj1nx 12 days ago [-]
Cognita works on top of LangChain! For your use case, you might not even need to develop anything: just index your data and you are good to go.

Try out different retrievers and test the accuracy and effectiveness for your use-case.

magaton 12 days ago [-]
Hello, a very interesting project. Congratulations on putting everything together. I have expressed some thoughts in the discussions section of the Cognita GitHub repo: https://github.com/truefoundry/cognita/discussions/146. It would be great if the maintainers could reply.
s1lv3rj1nx 12 days ago [-]
Sure! I'll check those :) Thank you for the suggestions; hoping for some awesome contributions from you :P
magaton 11 days ago [-]
Thanks, looking forward to your answers
dmundhra1992 12 days ago [-]
Congratulations on the launch! Will give this a try!

We were looking for a solution that would help our team test out the LLMs & prompts for repeatability and identifying edge cases.

The UI looks interesting, like a playground on top of the RAG framework, allowing the team to test out various prompts / configurations to handle edge cases, without requiring a lot of tech bandwidth!

s1lv3rj1nx 12 days ago [-]
Yeah! Do give it a try :) Experiment and develop great use cases!
parentheses 13 days ago [-]
Looks like a great product. I'll have to give it a try!

I like that the product seems to solve the RAG need only and not be an "everything framework" for LLMs. It makes for a richer-seeming product for RAG while leaving other aspects of AI apps open for the user to choose their own approach.

nikunjbjj 12 days ago [-]
Yes, the product is intended specifically for the RAG use case in production.
johnea 12 days ago [-]
Whatever you do, never say "free software"!!!

That "freedom" stuff is commonism...

agutgutia1991 10 days ago [-]
Agreed, we should acknowledge that every open-source release by a company carries some intent to drive adoption of their core platform!
ComputerGuru 12 days ago [-]
Does a "web" data source only scrape the individual page or linked pages as well? I'm assuming the former. What would be the least painful way to ingest a knowledgebase (say a wiki-like site) from the web?
s1lv3rj1nx 12 days ago [-]
It can scrape linked pages too by defining the depth, but make sure the depth parameter isn't too large, or it will consume too much memory and time.
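For intuition, the depth knob works roughly like the sketch below: a breadth-first crawl that stops following links past `max_depth`, here also restricted to the starting URL's prefix as one possible policy. This is illustrative only, not Cognita's actual loader code.

```python
# Illustrative depth-limited crawl, not Cognita's loader code.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_depth: int = 2) -> dict:
    pages, seen = {}, {start_url}
    queue = deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        html = requests.get(url, timeout=10).text
        pages[url] = html
        if depth >= max_depth:            # the "depth" knob: stop following links here
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if link.startswith(start_url) and link not in seen:  # stay under the start URL
                seen.add(link)
                queue.append((link, depth + 1))
    return pages
```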
ComputerGuru 11 days ago [-]
Playing around with the UI, I cannot see where that depth would be set. Is it not a per-datasource variable?

Is the "scrape linked pages" configured to be "sandboxed" within a url hierarchy (so adding example.com/foo/ would add all linked pages that are also under example.com/foo/) or not (so it would also include linked pages to other domains or subfolders)?

TechSageWow 12 days ago [-]
This product appears to be promising. I'm intrigued to test it out. I appreciate that it focuses solely on addressing the RAG requirement and doesn't attempt to be a one-size-fits-all solution for LLMs.
s1lv3rj1nx 12 days ago [-]
Indeed! There is no one-size-fits-all; the more you customise, the closer you get to your use case!
hiteshvyas11_ 11 days ago [-]
Interesting, is there any feature roadmap for future reference?
supreetgupta 10 days ago [-]
Hey Hitesh, thanks to our contributors, we've introduced some exciting new features to Cognita:

1. Added a VLM-based PDF parser.
2. Integrated an intelligent summary query controller. Now, you can input multiple questions at once, and the controller will break them down into individual queries, answering each in a streaming format. Finally, it provides a summary of all responses.
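For intuition, here is a rough sketch of that flow (not Cognita's actual controller): split the multi-question input, answer each question, stream the answers, then summarize. The `answer` and `summarize` callables stand in for the real retrieval and LLM steps.

```python
# Illustrative sketch of a summary query controller, not Cognita's implementation.
import re
from typing import Callable, Iterator

def summary_query_controller(user_input: str,
                             answer: Callable[[str], str],
                             summarize: Callable[[str], str]) -> Iterator[str]:
    # Naive split on question marks/newlines; a real controller could use an LLM here.
    questions = [q.strip() + "?" for q in re.split(r"[?\n]", user_input) if q.strip()]
    answers = []
    for q in questions:
        a = answer(q)                      # retrieval + generation per question
        answers.append(f"Q: {q}\nA: {a}")
        yield answers[-1]                  # stream each answer as it is ready
    yield "Summary: " + summarize("\n\n".join(answers))  # final summary of all responses
```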

Roadmap / Anticipated Contribution Scope:

1. Enabling hybrid and sparse vector search support
2. Implementing Embedding Quantization support
3. Integrating with GraphDBs and relevant retrievers
4. Enabling RAG Evaluation across various retrievers
5. Implementing RAG Visualization features

...and many other enhancements are awaiting.

Excited for the community's backing! Let's maintain the momentum of open source.

sagarpandey1 12 days ago [-]
Congratulations and good luck. Will give this a try!
s1lv3rj1nx 12 days ago [-]
Thanks! Awaiting your feedback.
esafak 13 days ago [-]
Many of the links are broken and lead to https://www.truefoundry.com/cognita-launch#

I tried on Firefox and Chrome.

I would make the GitHub link more prominent.

Congratulations and good luck.

supreetgupta 13 days ago [-]
Thanks for highlighting that! Here’s the GitHub link: https://github.com/truefoundry/cognita
namanyayg 13 days ago [-]
Congrats on the launch Supreet! Can you talk about how Cognita compares against competitors like RAGFlow?
nikunjbjj 12 days ago [-]
While a lot of RAG frameworks like RAGFlow, LangChain, and LlamaIndex help in the development phase of RAG, Cognita is developed to help productionize them well. In fact, it's not Cognita or the others but Cognita with the others. Cognita leverages existing amazing open-source frameworks and helps you organize the code in a manner that is easy to productionize.

The API endpoints for all modules are a major plus. Besides, the UI for testing out different configurations is helpful for debugging, improvement, and sharing with the rest of the world.

vivek0203 11 days ago [-]
Congratulations on the launch. I am building a GenAI application. Will explore it.
agutgutia1991 10 days ago [-]
You could try it locally or even use a hosted version, Vivek. Let us know if you face any issues. For early start-ups, there's a free tier that operates by connecting to any of your cloud accounts.
b2bsaas00 12 days ago [-]
What's the best practice for integrating this into a Ruby on Rails application?
jerpint 12 days ago [-]
It seems to be a Python app, so probably set it up as a separate microservice with its own REST API.
F-Lexx 12 days ago [-]
Best practice is to NOT integrate this in a Ruby on Rails application.
nikunjbjj 12 days ago [-]
But you can run Cognita as-is and you'll get a FastAPI server up and running. With that, you can utilize the REST endpoints from your Ruby app.
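To make the integration pattern concrete, here is a minimal client-side sketch, shown in Python for brevity (a Ruby app would make the same HTTP call with Net::HTTP or Faraday). The endpoint path and payload shape are hypothetical placeholders, not Cognita's actual API; a running FastAPI server documents its real routes at /docs.

```python
# Hypothetical client call against a locally running Cognita FastAPI server.
# The route and JSON payload below are placeholders -- consult the server's
# OpenAPI docs (served by FastAPI at /docs by default) for the real API.
import requests

resp = requests.post(
    "http://localhost:8000/retriever/answer",  # placeholder route
    json={"query": "How do I rotate our API keys?", "collection": "internal-wiki"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```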
adastra22 13 days ago [-]
What is RAG?
vintagedave 13 days ago [-]
Retrieval Augmented Generation.

The best explanation I can give as a non-expert is: it's used when you have a general-purpose LLM but want to give it some domain-specific knowledge. The query sent to the LLM is run through what's effectively a search engine that matches relevant terms etc. to find useful snippets of knowledge to send to the LLM alongside the query, so the query is _augmented_ with potentially useful information for answering it.
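In code, the idea is roughly this (a toy sketch; `embed` and `llm` are placeholders for a real embedding model and a real LLM call, not any particular framework's API):

```python
# Toy illustration of retrieval-augmented generation.
# `embed` and `llm` are placeholders for a real embedding model and LLM call;
# in practice the snippet embeddings would be precomputed and stored in a vector DB.
import numpy as np

def retrieve(query, snippets, embed, k=3):
    q = embed(query)
    scored = sorted(snippets,
                    key=lambda s: float(np.dot(embed(s), q)),  # similarity to the query
                    reverse=True)
    return scored[:k]

def rag_answer(query, snippets, embed, llm):
    context = "\n".join(retrieve(query, snippets, embed))
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)  # the query is _augmented_ with retrieved snippets
```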

hobs 12 days ago [-]
And really, it's almost always because LLMs are really good at summarization, OK at extrapolation, and otherwise lie a lot.