An assistive device interface for streaming LLM output using the T.140 real-time text protocol. This application provides real-time text from large language models (LLMs) to assistive devices, serving disabled users, particularly those who are deaf or hard of hearing.
- Connect to assistive devices via WebSocket or RTP transport using the T.140 protocol
- Stream responses from OpenAI and Anthropic LLMs to assistive devices
- Web-based admin interface for device management
- Support for different device types (hearing, visual, mobility, cognitive)
- Customizable character rate limiting for assistive devices
- File-based persistence for device configuration
- Node.js (v14 or higher)
- npm or pnpm
- OpenAI and/or Anthropic API keys (required for LLM streaming, optional otherwise)
- Clone the repository:

```bash
git clone https://github.com/yourusername/assistive-llm.git
cd assistive-llm
```
- Install dependencies:

```bash
npm install
# or
pnpm install
```
- Build the application:

```bash
npm run build
# or
pnpm build
```
Create a `.env` file in the root directory with the following environment variables:
```env
# Server configuration
PORT=3000
HOST=localhost

# LLM configuration
DEFAULT_LLM_PROVIDER=openai
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4
ANTHROPIC_API_KEY=your_anthropic_api_key
ANTHROPIC_MODEL=claude-3-sonnet-20240229

# Logging configuration
LOG_LEVEL=info
LOG_FILE=assistive-llm.log

# Database configuration
DB_PATH=./data
```
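A sketch of how these variables might be read on the Node side, using only `process.env` with the defaults shown above (the actual code may use a dotenv-style loader instead; the `config` object and its field names are illustrative, not the project's real module):

```javascript
// Illustrative config reader: falls back to the documented defaults
// when a variable is not set in the environment.
const config = {
  port: Number(process.env.PORT ?? 3000),
  host: process.env.HOST ?? 'localhost',
  llmProvider: process.env.DEFAULT_LLM_PROVIDER ?? 'openai',
  logLevel: process.env.LOG_LEVEL ?? 'info',
  dbPath: process.env.DB_PATH ?? './data',
};
```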
- Start the server:

```bash
npm start
# or
pnpm start
```
- For development with auto-restart:

```bash
npm run dev
# or
pnpm dev
```
- Access the admin interface at `http://localhost:3000`
- `GET /api/devices` - Get all devices
- `GET /api/devices/:id` - Get a specific device
- `POST /api/devices` - Add a new device
- `PUT /api/devices/:id` - Update a device
- `DELETE /api/devices/:id` - Delete a device
- `POST /api/devices/:id/connect` - Connect to a device
- `POST /api/devices/:id/disconnect` - Disconnect from a device
- `GET /api/devices/connections/active` - Get all active connections
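A minimal sketch of calling the device endpoints from a client. The device fields used here (`name`, `type`, `transport`) are assumptions based on the feature list, not the project's actual schema; check the route handlers for the real one:

```javascript
// Hypothetical helper: builds fetch options for the device endpoints.
function buildDeviceRequest(method, device) {
  return {
    method,
    headers: { 'Content-Type': 'application/json' },
    // Only mutating requests (POST/PUT) carry a JSON body.
    body: device ? JSON.stringify(device) : undefined,
  };
}

// Usage against a running server:
// await fetch('http://localhost:3000/api/devices', buildDeviceRequest('POST', {
//   name: 'Braille display', type: 'visual', transport: 'websocket',
// }));
```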
- `GET /api/llm/providers` - Get available LLM providers
- `POST /api/llm/stream/:deviceId` - Stream an LLM response to a device
- `POST /api/llm/stream-multiple` - Stream an LLM response to multiple devices
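A sketch of triggering a stream to one device. The request body fields (`prompt`, `provider`) are assumptions, not the documented schema; the provider values mirror the `DEFAULT_LLM_PROVIDER` setting above:

```javascript
// Hypothetical helper: builds fetch options for POST /api/llm/stream/:deviceId.
function buildStreamRequest(prompt, provider = 'openai') {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, provider }),
  };
}

// Usage against a running server:
// await fetch('http://localhost:3000/api/llm/stream/device-1',
//             buildStreamRequest('Summarize this announcement in plain text'));
```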
- The application follows a layered architecture with controllers, services, and routes
- Device configurations are persisted to disk using a simple file-based storage system
- Real-time streaming is handled by the t140llm library
- The web admin interface uses vanilla JavaScript and makes API calls to the backend
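The controller/service layering can be sketched as follows. The names (`deviceService`, `deviceController`) and the in-memory store are illustrative, not the project's actual modules; the real service persists to the file-based store under `DB_PATH`:

```javascript
// Service layer: owns the data and business logic (stubbed with a Map here).
const deviceService = {
  devices: new Map(),
  add(device) { this.devices.set(device.id, device); return device; },
  get(id) { return this.devices.get(id); },
};

// Controller layer: translates request payloads into service calls.
const deviceController = {
  create(body) { return deviceService.add(body); },
  read(id) { return deviceService.get(id); },
};

// The route layer would wire HTTP verbs to controller methods, e.g.:
// app.post('/api/devices', (req, res) => res.json(deviceController.create(req.body)));
```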
MIT