JES Boot Flow

The kernel boot flow of the JES system follows the flow defined by the application processor SoC vendor. After the kernel boots, the 'init' process used is BusyBox init. Unlike SysV init, BusyBox init does not support runlevels, so the 'runlevels' field is ignored. After 'init' starts, the following actions are executed:
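As a minimal sketch (the entries below are illustrative, not taken from the actual JES image), a BusyBox /etc/inittab uses the format <id>:<runlevels>:<action>:<process>, with the runlevels field left empty because BusyBox ignores it:

```
# BusyBox inittab format: <id>:<runlevels>:<action>:<process>
# The <runlevels> field is ignored by BusyBox init and left empty.
::sysinit:/etc/init.d/rcS        # run the system initialization script once at boot
::respawn:/sbin/getty 115200 ttyS0   # keep a serial console alive, restarting it if it exits
::shutdown:/bin/umount -a -r     # unmount filesystems on shutdown
```

The <id> field, when present, names the controlling tty for the process; leaving it empty attaches the process to the system console.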

TIYCam Software Development Guide

Introduction: The TIYCam software is called JES (Jovision Embedded System). JES is designed using the layer and block concepts: the layer concept views the software system vertically, while the block concept views it horizontally. JES uses the bus mode, i.e., JES[…]

Hardware Specifications of TIYCam

The hardware of TIYCam consists of four modules: the mainboard, the sensor board, the I/O extension board, and the lens driver board. Mainboard: Dimensions: 42 mm × 42 mm, mounting holes with a spacing of 38 mm × 38 mm, thickness: 1.6 mm. Silk-screen color: black. Components: Interfaces: pin 1 KEY, pin 2 TNTX+, pin 3 TNTX-, pin 4 TNRX+, pin 5 TNRX-, pin 6 LED_ACT, pin 7[…]

The Next Frontier For Large Language Models Is Biology

Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology. One striking theme has emerged from the long march of research progress across biochemistry, molecular biology[…]

MLC LLM: Enable everyone to develop, optimize and deploy AI models natively on everyone’s devices.

MLC LLM is a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases. Our mission is to enable everyone to develop, optimize, and deploy AI models natively on everyone's devices. Everything runs locally with no server[…]

Chinese Llama2 and Multimodal LlaVA

On July 19, Meta released Llama 2. The next day, a Chinese Llama 2 was released; the project can be found at: More impressively, it also comes with a multimodal version, called LLaVA, which can also talk to images and audio. It unifies the embeddings of text, audio, and images as shown below: The GitHub of Chinese[…]

AI Powered Call Center Intelligence Accelerator

The Call Center Intelligence Accelerator drives huge cost savings in call center operations while improving call center efficiency and customer satisfaction. It uses Azure Speech, Azure Language, and Azure OpenAI (GPT-3) services to analyze call center transcripts, extract and redact personally identifiable information (PII), summarize the transcription, and provide rich business insights that could be[…]

AI Blog

AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models
Fine-tune Llama 2 in Google Colab
Visual Studio Remote Development using SSH
Visual Instruction Tuning: LLaVA: Large Language and Vision Assistant
GGML model format for Large Language Models
OpenThaiGPT
The Next Frontier For Large Language Models Is Biology
How to Run Google Colab Locally: A Step-by-Step Guide
[...]