On July 19, Meta released Llama 2. The very next day, a Chinese Llama 2 was released; the project can be found at https://github.com/LinkSoul-AI/Chinese-Llama-2-7b. More impressively, it also comes with a multimodal version, based on LLaVA, that can converse about images and audio as well as text. It does this by unifying the embeddings of text, audio, and images. The github of Chinese[…]
Query and summarize your documents, or just chat with local private GPT LLMs, using h2oGPT, an Apache-2.0 open-source project. The github can be found at https://github.com/h2oai/h2ogpt. It can talk not only to documents but also to images, and it supports Llama 2.