Compare commits

...

9 Commits

Author SHA1 Message Date
pluja ca760f0d33 Fix data race with messageGPT 2023-04-19 21:41:33 +02:00
pluja e701b77037 Fix so it works with users without name 2023-04-19 21:27:25 +02:00
pluja 585adc1916 Fix database connection 2023-04-19 20:58:59 +02:00
pluja 2f2d0cc671 rename 2023-04-19 18:25:19 +02:00
pluja 2a1c92093a add env.example 2023-04-19 18:24:53 +02:00
pluja 59229789ba fix compose 2023-04-19 18:17:43 +02:00
pluja c1879465d6 update docker compose 2023-04-19 18:16:32 +02:00
pluja 486a6157a9 Update to a much better code and features 2023-04-19 18:14:20 +02:00
pluja 6502efa131 update to use sqlite 2023-04-19 12:53:18 +02:00
9 changed files with 441 additions and 278 deletions

5
.gitignore vendored
View file

@@ -1,3 +1,4 @@
-docker-compose.yml
 venv/
 .env
+*.db
+__pycache__/

Dockerfile
View file

@@ -1,8 +1,14 @@
-FROM python:3-bullseye
+FROM python:3.10-slim
-RUN apt update && apt install -y ffmpeg
+RUN apt update && apt install -y ffmpeg libespeak1
 WORKDIR /app
 COPY ./main.py /app
+COPY ./database.py /app
 COPY ./requirements.txt /app
+RUN mkdir db_data
+RUN pip install --upgrade pip
 RUN pip install -r requirements.txt
 CMD [ "python3", "/app/main.py" ]

README.md
View file

@@ -7,16 +7,20 @@ A telegram bot to interact with OpenAI API. You can:
 - Generate images with DALL-E: `/imagine`
 - Chat with ChatGPT: Just chat!
 - Transcribe audio and video to text: Just send a voice message or a video file!
+- Voice chat with ChatGPT:
+  - Send voice message.
+  - Receive voice messages.
+  - Use GoogleTTS or 100% local Espeak (more robotic).
 Other features include:
 - Talk to ChatGPT with audio transcriptions (whisper).
 - Clear ChatGPT context history (to save tokens).
 - Reply to any message to use it as context for ChatGPT.
-- Per-user context and usage metrics and spent $.
-- No database, data is saved in-memory.
-  - A drawback of this is that data is reset on each docker restart. Will look into solutions for this.
-- Lightweight: a single python file.
+- Per-user context.
+- See usage metrics and spent $.
+- Persistent data with sqlite3 database.
+- Lightweight: few lines of code.
 [Jump to selfhosting guide](#self-hosting)
@@ -42,14 +46,15 @@ Self hosting this chatbot is pretty easy. You just need to follow this steps:
 4. Setup the bot:
     1. Clone this repo.
-    2. Rename the `example.docker-compose.yml` file to `docker-compose.yml`.
-    3. Edit the environment variables:
+    2. Rename the `example.env` file to `.env`.
+    3. Edit the environment variables from the `.env` file:
         1. Set your OPENAI_TOKEN.
         2. Set your BOT_TOKEN.
         3. Set your ALLOWED_USERS (comma separated user ids). Set it to `*` to allow all users.
         4. Set the SYSTEM_PROMPT for ChatGPT. This is always instructed to ChatGPT as the system.
         5. Optional: Edit the MAX_CONTEXT. This variable sets the number of messages that will be sent to ChatGPT API as context for the conversation.
         6. WHISPER_TO_CHAT allows you to choose wether Whisper transcripts should be instructed to ChatGPT or not.
+        6. ENABLE_GOOGLE_TTS the TTS service will be provided by GoogleTTS, producing more natural voices.
     4. Build and start the bot: `docker compose up --build -d`.
     5. Enjoy!

116
database.py 100644
View file

@@ -0,0 +1,116 @@
+import sqlite3
+import json
+
+DB_PATH = "db_data/users.db"
+
+def init_database():
+    conn = sqlite3.connect(DB_PATH)
+    c = conn.cursor()
+    c.execute("""
+        CREATE TABLE IF NOT EXISTS users (
+            chat_id TEXT PRIMARY KEY,
+            context TEXT,
+            usage_chatgpt INTEGER,
+            usage_whisper INTEGER,
+            usage_dalle INTEGER,
+            whisper_to_chat INTEGER,
+            assistant_voice_chat INTEGER,
+            temperature REAL,
+            max_context INTEGER
+        )
+    """)
+    print("Database initialized")
+    conn.commit()
+    conn.close()
+
+def get_user(chat_id: str):
+    conn = sqlite3.connect(DB_PATH)
+    c = conn.cursor()
+    c.execute("SELECT * FROM users WHERE chat_id = ?", (chat_id,))
+    user = c.fetchone()
+    conn.close()
+    if user:
+        return {
+            "context": json.loads(user[1]),
+            "usage": {
+                "chatgpt": user[2],
+                "whisper": user[3],
+                "dalle": user[4]
+            },
+            "options": {
+                "whisper_to_chat": bool(user[5]),
+                "assistant_voice_chat": bool(user[6]),
+                "temperature": user[7],
+                "max-context": user[8]
+            }
+        }
+    return None
+
+def add_user(chat_id: str, user_data):
+    conn = sqlite3.connect(DB_PATH)
+    c = conn.cursor()
+    c.execute("""
+        INSERT INTO users (
+            chat_id, context, usage_chatgpt, usage_whisper, usage_dalle,
+            whisper_to_chat, assistant_voice_chat, temperature, max_context
+        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
+    """, (
+        chat_id,
+        json.dumps(user_data["context"]),
+        user_data["usage"]["chatgpt"],
+        user_data["usage"]["whisper"],
+        user_data["usage"]["dalle"],
+        int(user_data["options"]["whisper_to_chat"]),
+        int(user_data["options"]["assistant_voice_chat"]),
+        user_data["options"]["temperature"],
+        user_data["options"]["max-context"]
+    ))
+    conn.commit()
+    conn.close()
+
+def update_user(chat_id: str, user_data):
+    conn = sqlite3.connect(DB_PATH)
+    c = conn.cursor()
+    c.execute("""
+        UPDATE users
+        SET
+            context = ?,
+            usage_chatgpt = ?,
+            usage_whisper = ?,
+            usage_dalle = ?,
+            whisper_to_chat = ?,
+            assistant_voice_chat = ?,
+            temperature = ?,
+            max_context = ?
+        WHERE chat_id = ?
+    """, (
+        json.dumps(user_data["context"]),
+        user_data["usage"]["chatgpt"],
+        user_data["usage"]["whisper"],
+        user_data["usage"]["dalle"],
+        int(user_data["options"]["whisper_to_chat"]),
+        int(user_data["options"]["assistant_voice_chat"]),
+        user_data["options"]["temperature"],
+        user_data["options"]["max-context"],
+        chat_id
+    ))
+    conn.commit()
+    conn.close()
+
+def get_total_usage():
+    conn = sqlite3.connect(DB_PATH)
+    c = conn.cursor()
+    c.execute("""
+        SELECT
+            SUM(usage_chatgpt) AS total_chatgpt,
+            SUM(usage_whisper) AS total_whisper,
+            SUM(usage_dalle) AS total_dalle
+        FROM users
+    """)
+    total_usage = c.fetchone()
+    conn.close()
+    return {
+        "chatgpt": total_usage[0],
+        "whisper": total_usage[1],
+        "dalle": total_usage[2]
+    }
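As a quick illustration (not part of the committed changes), a minimal smoke-test sketch for the new database.py module, assuming it is run from the repository root; the chat id and usage numbers are made-up placeholders, and the script creates the db_data/ directory that DB_PATH expects (the Dockerfile does this with RUN mkdir db_data instead):

import os

import database

os.makedirs("db_data", exist_ok=True)  # DB_PATH lives under db_data/
database.init_database()

# Shape of user_data mirrors what main.py's getUserData() builds.
user_data = {
    "context": [],
    "usage": {"chatgpt": 0, "whisper": 0, "dalle": 0},
    "options": {
        "whisper_to_chat": True,
        "assistant_voice_chat": False,
        "temperature": 1.0,
        "max-context": 5,
    },
}

chat_id = "123456789"  # placeholder Telegram chat id
if database.get_user(chat_id) is None:
    database.add_user(chat_id, user_data)

stored = database.get_user(chat_id)
stored["usage"]["chatgpt"] += 100           # pretend we spent 100 tokens
database.update_user(chat_id, stored)

print(database.get_user(chat_id)["usage"])  # {'chatgpt': 100, 'whisper': 0, 'dalle': 0}
print(database.get_total_usage())           # sums usage across all rows in the users table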

View file

@@ -0,0 +1,8 @@
+version: '3.9'
+services:
+  chatbot:
+    build: .
+    env_file:
+      - .env
+    volumes:
+      - ./db_data:/app/db_data

View file

@@ -1,13 +0,0 @@
-version: '3.9'
-services:
-  chatbot:
-    build: .
-    environment:
-      OPENAI_API_KEY: "XXX"
-      OPENAI_MODEL: gpt-3.5-turbo
-      CHATGPT_SYSTEM_PROMPT: "You are a helpful assistant."
-      CHATGPT_TEMPERATURE: 1.0
-      CHATGPT_MAX_USER_CONTEXT: 5
-      WHISPER_TO_CHAT: 1
-      BOT_TOKEN: "XXX"
-      BOT_ALLOWED_USERS: USER_ID_1,USER_ID_2

14
example.env 100644
View file

@@ -0,0 +1,14 @@
+OPENAI_API_KEY=your-openai-api-key
+OPENAI_MODEL=gpt-3.5-turbo
+CHATGPT_SYSTEM_PROMPT=You are a helpful assistant. Always use Markdown for formatting.
+CHATGPT_MAX_USER_CONTEXT=5
+CHATGPT_TEMPERATURE=1.0
+# Use Whisper transcript from voice message with ChatGPT
+WHISPER_TO_CHAT=1
+# Use Google TTS for speech to text
+ENABLE_GOOGLE_TTS=0
+BOT_TOKEN=your-telegram-bot-token
+BOT_ALLOWED_USERS= XXXX,YYYY # Comma separated list of Telegram user IDs

525
main.py
View file

@@ -1,163 +1,258 @@
-import os
-import re
-import openai
 import logging
-import asyncio
-import math
+import os
+import tempfile
+from functools import wraps
+from io import BytesIO
+import openai
+from aiogram import Bot, Dispatcher, types
+from aiogram.contrib.middlewares.logging import LoggingMiddleware
+from aiogram.types import InlineKeyboardButton, InlineKeyboardMarkup, ParseMode
+from aiogram.types.input_file import InputFile
+from aiogram.utils import executor
 from dotenv import load_dotenv
+from gtts import gTTS
+import pyttsx3
 from pydub import AudioSegment
-from telegram import Update
-from functools import wraps
-from telegram.constants import ChatAction
-from functools import wraps
-from telegram.error import BadRequest, RetryAfter, TimedOut
-from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes, MessageHandler, filters, CallbackQueryHandler
-from telegram import InlineKeyboardButton, InlineKeyboardMarkup
+import database
 logging.basicConfig(
-    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
+    format='%(asctime)s - %(levelname)s - %(message)s',
     level=logging.INFO
 )
 logger = logging.getLogger(__name__)
-# Envrionment Variables Load
 load_dotenv()
 if os.environ.get("OPENAI_API_KEY") is None:
     print("OpenAI_API_KEY is not set in.env file or OPENAI_API_KEY environment variable is not set")
     exit(1)
-ALLOWED_USERS=os.environ.get("BOT_ALLOWED_USERS").split(",")
-SYSTEM_PROMPT=os.environ.get("CHATGPT_SYSTEM_PROMPT")
-TEMPERATURE=os.environ.get("CHATGPT_TEMPERATURE")
-MODEL=os.environ.get("OPENAI_MODEL")
-WHISPER_TO_CHAT=bool(int(os.environ.get("WHISPER_TO_CHAT")))
-MAX_USER_CONTEXT=int(os.environ.get("CHATGPT_MAX_USER_CONTEXT"))
+BOT_TOKEN = os.getenv("BOT_TOKEN")
+bot = Bot(token=BOT_TOKEN)
+dp = Dispatcher(bot)
+dp.middleware.setup(LoggingMiddleware())
+ALLOWED_USERS = os.environ.get("BOT_ALLOWED_USERS").split(",")
+SYSTEM_PROMPT = os.environ.get("CHATGPT_SYSTEM_PROMPT")
+TEMPERATURE = os.environ.get("CHATGPT_TEMPERATURE")
+MODEL = os.environ.get("OPENAI_MODEL")
+WHISPER_TO_CHAT = bool(int(os.environ.get("WHISPER_TO_CHAT")))
+ENABLE_GOOGLE_TTS = bool(int(os.environ.get("ENABLE_GOOGLE_TTS")))
+MAX_USER_CONTEXT = int(os.environ.get("CHATGPT_MAX_USER_CONTEXT"))
 openai.api_key = os.environ.get("OPENAI_API_KEY")
-users = {
-    "userid": {
-        "context": [],
-        "usage": {
-            "chatgpt": 0,
-            "whisper": 0,
-            "dalle": 0,
-        },
-        "options": {
-            "whisper-to-chat": WHISPER_TO_CHAT,
-            "temperature": 0.9,
-            "max-context": 5
-        }
-    },
-}
+async def getUserData(chat_id):
+    user_data = database.get_user(chat_id)
+    if not user_data:
+        user_data = {
+            "context": [],
+            "usage": {"chatgpt": 0, "whisper": 0, "dalle": 0},
+            "options": {
+                "whisper_to_chat": WHISPER_TO_CHAT,
+                "assistant_voice_chat": False,
+                "temperature": float(TEMPERATURE),
+                "max-context": MAX_USER_CONTEXT
+            }
+        }
+        database.add_user(chat_id, user_data)
+        user_data = database.get_user(chat_id)
+    return user_data
+def generate_settings_markup(chat_id: str) -> InlineKeyboardMarkup:
+    keyboard = [
+        [
+            InlineKeyboardButton("Increase Temperature", callback_data=f"setting_inc_temp_{chat_id}"),
+            InlineKeyboardButton("Decrease Temperature", callback_data=f"setting_dec_temp_{chat_id}")
+        ],
+        [
+            InlineKeyboardButton("Enable Whisper", callback_data=f"setting_en_whisper_{chat_id}"),
+            InlineKeyboardButton("Disable Whisper", callback_data=f"setting_dis_whisper_{chat_id}")
+        ],
+        [
+            InlineKeyboardButton("Enable assistant voice", callback_data=f"setting_en_voice_{chat_id}"),
+            InlineKeyboardButton("Disable assistant voice", callback_data=f"setting_dis_voice_{chat_id}")
+        ],
+        [
+            InlineKeyboardButton("Increase Context", callback_data=f"setting_inc_context_{chat_id}"),
+            InlineKeyboardButton("Decrease Context", callback_data=f"setting_dec_context_{chat_id}")
+        ]
+    ]
+    return InlineKeyboardMarkup(inline_keyboard=keyboard)
+async def text_to_voice(text: str) -> BytesIO:
+    if ENABLE_GOOGLE_TTS:
+        tts = gTTS(text)
+        with tempfile.NamedTemporaryFile(mode='wb', suffix='.mp3', delete=False) as mp3_file:
+            temp_mp3_filename = mp3_file.name
+        tts.save(temp_mp3_filename)
+    else:
+        engine = pyttsx3.init() # PyTTSX3 Engine
+        engine.setProperty('rate', 150)
+        with tempfile.NamedTemporaryFile(mode='wb', suffix='.mp3', delete=False) as mp3_file:
+            temp_mp3_filename = mp3_file.name
+        engine.save_to_file(text, temp_mp3_filename)
+        engine.runAndWait()
+    mp3_audio = AudioSegment.from_file(temp_mp3_filename, format="mp3")
+    with tempfile.NamedTemporaryFile(mode="wb", suffix=".ogg", delete=False) as ogg_file:
+        temp_ogg_filename = ogg_file.name
+    mp3_audio.export(temp_ogg_filename, format="ogg")
+    with open(temp_ogg_filename, "rb") as audio_file:
+        _ = InputFile(audio_file)
+        voice_data = BytesIO(audio_file.read())
+    os.remove(temp_mp3_filename)
+    os.remove(temp_ogg_filename)
+    voice_data.seek(0)
+    return voice_data
 def restricted(func):
     @wraps(func)
-    async def wrapped(update, context, *args, **kwargs):
-        user_id = update.effective_user.id
-        if str(user_id) not in ALLOWED_USERS:
+    async def wrapped(message, *args, **kwargs):
+        user_id = str(message.chat.id)
+        if user_id not in ALLOWED_USERS:
            if "*" != ALLOWED_USERS[0]:
                print(f"Unauthorized access denied for {user_id}.")
                return
            else:
-                if not f"{update.effective_chat.id}" in users:
-                    users[f"{update.effective_chat.id}"] = {"context": [], "usage": {"chatgpt": 0,"whisper": 0,"dalle": 0,}, "options": {"whisper-to-chat": WHISPER_TO_CHAT, "temperature": float(TEMPERATURE), "max-context": MAX_USER_CONTEXT}}
-        return await func(update, context, *args, **kwargs)
+                _ = await getUserData(user_id)
+        return await func(message, *args, **kwargs)
     return wrapped
-async def messageGPT(text: str, chat_id: str):
-    # Initialize user if not present
-    if chat_id not in users:
-        users[chat_id] = {"context": [], "usage": {"chatgpt": 0,"whisper": 0,"dalle": 0,}, "options": {"whisper-to-chat": WHISPER_TO_CHAT, "temperature": float(TEMPERATURE), "max-context": MAX_USER_CONTEXT}}
-    # Update context
-    user_context = users[chat_id]["context"]
-    user_context.append({"role": "user", "content": text})
-    if len(user_context) > users[chat_id]["options"]["max-context"]:
-        user_context.pop(0)
-    # Interact with ChatGPT API and stream the response
-    response = None
+async def messageGPT(text: str, chat_id: str, user_name="User", user_data={}):
+    await bot.send_chat_action(chat_id, action=types.ChatActions.TYPING)
+    user_data['context'].append({"role": "user", "content": text})
+    if len(user_data['context']) > user_data["options"]["max-context"]:
+        user_data['context'].pop(0)
     try:
         response = openai.ChatCompletion.create(
             model=MODEL,
-            messages=[{"role": "system", "content": SYSTEM_PROMPT}] + user_context,
-            temperature=users[chat_id]["options"]["temperature"],
+            messages=[{"role": "system", "content": f"You are chatting with {user_name}. {SYSTEM_PROMPT}"}] + user_data['context'],
+            temperature=user_data["options"]["temperature"],
         )
-    except:
-        return "There was a problem with OpenAI, so I can't answer you."
-    # Initialize variables for streaming
-    assistant_message = ""
-    if 'choices' in response:
-        assistant_message = response['choices'][0]['message']['content']
-    else:
-        assistant_message = "There was a problem with OpenAI. Maybe your prompt is forbidden? They like to censor a lot!"
-    # Update context
-    user_context.append({"role": "assistant", "content": assistant_message})
-    if len(user_context) > users[chat_id]["options"]["max-context"]:
-        user_context.pop(0)
-    # Update usage
-    users[chat_id]["usage"]['chatgpt'] += int(response['usage']['total_tokens'])
-    return assistant_message
+    except Exception as e:
+        print(e)
+        return f"There was a problem with OpenAI, so I can't answer you: \n\n{e}"
+    assistant_message = response.get('choices', [{}])[0].get('message', {"content": None}).get("content", "There was a problem with OpenAI. Maybe your prompt is forbidden? They like to censor a lot!")
+    user_data['context'].append({"role": "assistant", "content": assistant_message})
+    if len(user_data['context']) > user_data["options"]["max-context"]:
+        user_data['context'].pop(0)
+    user_data["usage"]['chatgpt'] += int(response.get('usage', {"total_tokens": 0})["total_tokens"])
+    database.update_user(chat_id, user_data)
+    return assistant_message, user_data
+@dp.message_handler(commands=['start'])
 @restricted
-async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
-    if not f"{update.effective_chat.id}" in users:
-        users[f"{update.effective_chat.id}"] = {"context": [], "usage": {"chatgpt": 0,"whisper": 0,"dalle": 0,}, "options": {"whisper-to-chat": WHISPER_TO_CHAT, "temperature": float(TEMPERATURE), "max-context": MAX_USER_CONTEXT}}
-    await context.bot.send_message(chat_id=update.effective_chat.id, text="I'm a bot, please talk to me!")
+async def start(message: types.Message):
+    _ = await getUserData(message.chat.id)
+    await message.reply("Hello, how can I assist you today?")
+@dp.message_handler(commands=['clear'], content_types=['text'])
+@restricted
+async def clear(message: types.Message) -> None:
+    chat_id = str(message.chat.id)
+    user_data = await getUserData(chat_id)
+    if user_data:
+        user_data["context"] = []
+        database.update_user(chat_id, user_data)
+        print(f"Cleared context for {message.from_user.full_name}")
+    await message.reply("Your message context history was cleared.")
+@dp.message_handler(commands=['usage'])
+@restricted
+async def usage(message: types.Message) -> None:
+    chat_id = str(message.chat.id)
+    user_data = database.get_user(chat_id)
+    user_usage = user_data["usage"]
+    total_usage = database.get_total_usage()
+    user_spent = round((((user_usage['chatgpt'] / 750) * 0.002) + (float(user_usage['dalle']) * 0.02) + ((user_usage['whisper'] / 60.0) * 0.006)), 4)
+    total_spent = round((((total_usage['chatgpt'] / 750) * 0.002) + (float(total_usage['dalle']) * 0.02) + ((total_usage['whisper'] / 60.0) * 0.006)), 4)
+    user_percentage = (user_spent / total_spent) * 100 if total_spent > 0 else 0
+    info_message = f"""User: {message.from_user.full_name}
+- Used ~{user_usage["chatgpt"]} tokens with ChatGPT.
+- Generated {user_usage["dalle"]} images with DALL-E.
+- Transcribed {round(float(user_usage["whisper"]) / 60.0, 2)}min with Whisper.
+Total spent: ${user_spent} ({user_percentage:.2f}% of total)
+Total usage:
+- ChatGPT tokens: {total_usage["chatgpt"]}
+- DALL-E images: {total_usage["dalle"]}
+- Whisper transcription: {round(float(total_usage["whisper"]) / 60.0, 2)}min
+Total spent: ${total_spent}"""
+    await message.reply(info_message)
-@restricted
-async def imagine(update: Update, context: ContextTypes.DEFAULT_TYPE):
-    users[f"{update.effective_chat.id}"]["usage"]['dalle'] += 1
-    await context.bot.send_chat_action(chat_id=update.effective_chat.id, action=ChatAction.TYPING)
+@dp.message_handler(lambda message: message.chat.type == types.ChatType.PRIVATE, content_types=['text'], regexp='^/imagine')
+@restricted
+async def imagine(message: types.Message):
+    await bot.send_chat_action(message.chat.id, action=types.ChatActions.TYPING)
+    user_data = await getUserData(message.chat.id)
+    user_data["usage"]['dalle'] += 1
+    database.update_user(message.chat.id, user_data)
     response = openai.Image.create(
-        prompt=update.message.text,
+        prompt=message.text,
         n=1,
         size="1024x1024"
     )
     try:
         image_url = response['data'][0]['url']
-        await context.bot.send_message(chat_id=update.effective_chat.id, text=image_url)
-    except:
-        await context.bot.send_message(chat_id=update.effective_chat.id, text="Error generating. Your prompt may contain text that is not allowed by OpenAI safety system.")
+        await message.reply(image_url)
+    except Exception as e:
+        print(e)
+        await message.reply("Error generating. Your prompt may contain text that is not allowed by OpenAI safety system.")
+@dp.message_handler(content_types=['photo', 'video', 'audio', 'voice'])
 @restricted
-async def attachment(update: Update, context: ContextTypes.DEFAULT_TYPE):
-    # Initialize variables
-    chat_id = update.effective_chat.id
-    await context.bot.send_chat_action(chat_id=chat_id, action=ChatAction.TYPING)
-    users[f"{chat_id}"]["usage"]['whisper'] = 0
+async def attachment(message: types.Message):
+    chat_id = message.chat.id
+    user_data = await getUserData(chat_id)
+    await bot.send_chat_action(chat_id, action=types.ChatActions.TYPING)
     transcript = {'text': ''}
     audioMessage = False
-    # Check if the attachment is a voice message
-    if update.message.voice:
-        users[f"{chat_id}"]["usage"]['whisper'] += update.message.voice.duration
-        file_id = update.message.voice.file_id
+    if message.voice:
+        user_data["usage"]['whisper'] += message.voice.duration
+        file_id = message.voice.file_id
         file_format = "ogg"
         audioMessage = True
-    # Check if the attachment is a video
-    elif update.message.video:
-        users[f"{chat_id}"]["usage"]['whisper'] += update.message.video.duration
-        file_id = update.message.video.file_id
+    elif message.video:
+        user_data["usage"]['whisper'] += message.video.duration
+        file_id = message.video.file_id
         file_format = "mp4"
-    # Check if the attachment is an audio file
-    elif update.message.audio:
-        users[f"{chat_id}"]["usage"]['whisper'] += update.message.audio.duration
-        file_id = update.message.audio.file_id
+    elif message.audio:
+        user_data["usage"]['whisper'] += message.audio.duration
+        file_id = message.audio.file_id
         file_format = "mp3"
     else:
-        await context.bot.send_message(chat_id=chat_id, text="Can't handle such file. Reason: unknown.")
+        await message.reply("Can't handle such file. Reason: unknown.")
         return
-    # Download the file and convert it if necessary
-    file = await context.bot.get_file(file_id)
-    user_id = update.effective_user.id
-    await file.download_to_drive(f"{user_id}.{file_format}")
+    file = await bot.get_file(file_id)
+    user_id = message.chat.id
+    await file.download(f"{user_id}.{file_format}")
     if file_format == "ogg":
         ogg_audio = AudioSegment.from_file(f"{user_id}.ogg", format="ogg")
@@ -165,174 +260,104 @@ async def attachment(update: Update, context: ContextTypes.DEFAULT_TYPE):
         os.remove(f"{user_id}.ogg")
         file_format = "mp3"
-    # Transcribe the audio
     with open(f"{user_id}.{file_format}", "rb") as audio_file:
         try:
+            await bot.send_chat_action(chat_id, action=types.ChatActions.TYPING)
             transcript = openai.Audio.transcribe("whisper-1", audio_file)
-        except:
-            await context.bot.send_message(chat_id=chat_id, text="Transcript failed.")
+        except Exception as e:
+            print(e)
+            await message.reply("Transcript failed.")
             os.remove(f"{user_id}.{file_format}")
             return
     os.remove(f"{user_id}.{file_format}")
-    # Send the transcript
     if transcript['text'] == "":
         transcript['text'] = "[Silence]"
-    if audioMessage and users[f"{chat_id}"]["options"]["whisper-to-chat"]:
-        chatGPT_response = await messageGPT(transcript['text'], str(chat_id))
+    chatGPT_response = False
+    if audioMessage and user_data["options"]["whisper_to_chat"]:
+        chatGPT_response, user_data = await messageGPT(transcript['text'], str(chat_id), message.from_user.full_name, user_data)
         transcript['text'] = "> " + transcript['text'] + "\n\n" + chatGPT_response
-    # Check if the transcript length is longer than 4095 characters
-    if len(transcript['text']) > 4095:
-        # Split the transcript into multiple messages without breaking words in half
-        max_length = 4096
-        words = transcript['text'].split()
-        current_message = ""
-        for word in words:
-            if len(current_message) + len(word) + 1 > max_length:
-                await context.bot.send_message(chat_id=chat_id, text=current_message)
-                current_message = ""
-            current_message += f"{word} "
-        if current_message:
-            await context.bot.send_message(chat_id=chat_id, text=current_message)
-    else:
-        await context.bot.send_message(chat_id=chat_id, text=transcript['text'])
+    await message.reply(transcript['text'])
+    if user_data["options"]["assistant_voice_chat"] and chatGPT_response:
+        await bot.send_chat_action(chat_id, action=types.ChatActions.TYPING)
+        voice_data = await text_to_voice(chatGPT_response)
+        await message.reply_voice(voice_data)
+    database.update_user(str(chat_id), user_data)
 @restricted
-async def chat(update: Update, context: ContextTypes.DEFAULT_TYPE):
-    chat_id = str(update.effective_chat.id)
-    await context.bot.send_chat_action(chat_id=chat_id, action=ChatAction.TYPING)
-    # Initialize user if not present
-    if chat_id not in users:
-        users[chat_id] = {"context": [], "usage": {"chatgpt": 0, "whisper": 0, "dalle": 0}}
-    # Check if replying and add context
-    if hasattr(update.message.reply_to_message, "text"):
-        user_prompt = f"In reply to: '{update.message.reply_to_message.text}' \n---\n{update.message.text}"
-    else:
-        user_prompt = update.message.text
-    # Use messageGPT function to get the response
-    assistant_message = await messageGPT(user_prompt, chat_id)
-    await context.bot.send_message(chat_id=update.effective_chat.id, text=assistant_message)
+@dp.message_handler(commands=['settings'])
+async def settings(message: types.Message):
+    chat_id = str(message.chat.id)
+    settings_markup = generate_settings_markup(chat_id)
+    await message.reply(text='Settings:', reply_markup=settings_markup)
+async def settings_callback(callback_query: types.CallbackQuery):
+    user_data = await getUserData(callback_query.message.chat.id)
+    action, chat_id = callback_query.data.rsplit("_", 1)
+    options = user_data["options"]
+    if action.startswith("setting_inc_temp"):
+        options["temperature"] = min(options["temperature"] + 0.1, 1)
+    elif action.startswith("setting_dec_temp"):
+        options["temperature"] = max(options["temperature"] - 0.1, 0)
+    elif action.startswith("setting_en_whisper"):
+        options["whisper_to_chat"] = True
+    elif action.startswith("setting_dis_whisper"):
+        options["whisper_to_chat"] = False
+    elif action.startswith("setting_en_voice"):
+        options["assistant_voice_chat"] = True
+    elif action.startswith("setting_dis_voice"):
+        options["assistant_voice_chat"] = False
+    elif action.startswith("setting_inc_context"):
+        options["max-context"] = min(options["max-context"] + 1, MAX_USER_CONTEXT)
+    elif action.startswith("setting_dec_context"):
+        options["max-context"] = max(options["max-context"] - 1, 1)
+    settings_markup = generate_settings_markup(chat_id)
+    await callback_query.message.edit_text(text='Choose a setting option:', reply_markup=settings_markup)
+    database.update_user(chat_id, user_data)
+    settings_txt = f"Updated settings:\n\nTemperature: {options['temperature']}\nWhisper to Chat: {options['whisper_to_chat']}\nAssistant voice: {options['assistant_voice_chat']}\nContext Length: {options['max-context']}"
+    await callback_query.answer()
+    await callback_query.message.reply(text=settings_txt)
+@dp.message_handler(lambda message: message.chat.type == types.ChatType.PRIVATE and not message.text.startswith("/"), content_types=['text'])
+async def chat(message: types.Message):
+    chat_id = str(message.chat.id)
+    user_data = await getUserData(chat_id)
+    user_prompt = message.text
+    await bot.send_chat_action(chat_id, action=types.ChatActions.TYPING)
+    assistant_message, user_data = await messageGPT(user_prompt, chat_id, message.from_user.full_name, user_data)
+    await message.reply(assistant_message, parse_mode=ParseMode.MARKDOWN)
+    if user_data["options"]["assistant_voice_chat"]:
+        await bot.send_chat_action(chat_id, action=types.ChatActions.TYPING)
+        voice_data = await text_to_voice(assistant_message)
+        await message.reply_voice(voice_data)
-@restricted
-async def clear(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
-    if f"{update.effective_chat.id}" in users:
-        users[f"{update.effective_chat.id}"]["context"] = []
-        print(f"Cleared context for {update.effective_chat.id}")
-    await update.message.reply_text(f'Your message context history was cleared.')
-@restricted
-async def usage(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
-    user_info=users[f"{update.effective_chat.id}"]["usage"]
-    total_spent=0.0
-    total_spent+=(user_info['chatgpt']/750)*0.002
-    total_spent+=float(user_info['dalle'])*0.02
-    total_spent+=(user_info['whisper']/60.0)*0.006
-    info_message=f"""User: {update.effective_user.name}\n- Used ~{user_info["chatgpt"]} tokens with ChatGPT.\n- Generated {user_info["dalle"]} images with DALL-E.\n- Transcribed {round(float(user_info["whisper"])/60.0, 2)}min with Whisper.\n\nTotal spent: ${str(total_spent)}"""
-    await context.bot.send_message(chat_id=update.effective_chat.id, text=info_message)
-@restricted
-async def _help(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
-    help_message="""Here's what you can do:\n\n
-- /imagine <prompt> to generate an image with DALL-E\n- Send a message to chat with ChatGPT\n
-- Send an audio to transcribe to text with Whisper.\n\n
-- /settings To change your settings.\n
-- /usage To get your usage statistics.\n
-- /clear To clear you chatgpt message context (start a new chat)."""
-    await context.bot.send_message(chat_id=update.effective_chat.id, text=help_message)
-# Function to generate the settings buttons
-def generate_settings_markup(chat_id: str) -> InlineKeyboardMarkup:
-    keyboard = [
-        [
-            InlineKeyboardButton("Increase Temperature", callback_data=f"setting_increase_temperature_{chat_id}"),
-            InlineKeyboardButton("Decrease Temperature", callback_data=f"setting_decrease_temperature_{chat_id}")
-        ],
-        [
-            InlineKeyboardButton("Enable Whisper to Chat", callback_data=f"setting_enable_whisper_{chat_id}"),
-            InlineKeyboardButton("Disable Whisper to Chat", callback_data=f"setting_disable_whisper_{chat_id}")
-        ],
-        [
-            InlineKeyboardButton("Increase Context", callback_data=f"setting_increase_context_{chat_id}"),
-            InlineKeyboardButton("Decrease Context", callback_data=f"setting_decrease_context_{chat_id}")
-        ]
-    ]
-    return InlineKeyboardMarkup(keyboard)
-@restricted
-async def settings(update: Update, context: ContextTypes.DEFAULT_TYPE):
-    chat_id = update.effective_chat.id
-    settings_markup = generate_settings_markup(chat_id)
-    await context.bot.send_message(chat_id=chat_id, text="Settings:", reply_markup=settings_markup)
-async def settings_callback(update: Update, context: ContextTypes.DEFAULT_TYPE):
-    query = update.callback_query
-    action, chat_id = query.data.rsplit("_", 1)
-    if action.startswith("setting_increase_temperature"):
-        users[chat_id]["options"]["temperature"] = min(users[chat_id]["options"]["temperature"] + 0.1, 1)
-    elif action.startswith("setting_decrease_temperature"):
-        users[chat_id]["options"]["temperature"] = max(users[chat_id]["options"]["temperature"] - 0.1, 0)
-    elif action.startswith("setting_enable_whisper"):
-        print(f"enabling whisper for {chat_id}")
-        users[chat_id]["options"]["whisper-to-chat"] = True
-    elif action.startswith("setting_disable_whisper"):
-        print(f"disabling whisper for {chat_id}")
-        users[chat_id]["options"]["whisper-to-chat"] = False
-    elif action.startswith("setting_increase_context"):
-        users[chat_id]["options"]["max-context"] = min(users[chat_id]["options"]["max-context"] + 1, MAX_USER_CONTEXT)
-    elif action.startswith("setting_decrease_context"):
-        users[chat_id]["options"]["max-context"] = max(users[chat_id]["options"]["max-context"] - 1, 1)
-    settings_markup = generate_settings_markup(chat_id)
-    await query.edit_message_text(text="Choose a setting option:", reply_markup=settings_markup)
-    # Remove the settings message
-    await context.bot.delete_message(chat_id=query.message.chat_id, message_id=query.message.message_id)
-    # Send a message displaying the updated settings
-    settings_message = f"""Updated settings:\n\nTemperature: {users[chat_id]['options']['temperature']}\nWhisper to Chat: {users[chat_id]['options']['whisper-to-chat']}\nContext Length: {users[chat_id]["options"]["max-context"]}"""
-    await context.bot.send_message(chat_id=chat_id, text=settings_message)
 if __name__ == '__main__':
+    database.init_database()
     try:
-        ALLOWED_USERS=os.environ.get("BOT_ALLOWED_USERS").split(",")
-    except:
-        ALLOWED_USERS=ALLOWED_USERS
+        ALLOWED_USERS = os.environ.get("BOT_ALLOWED_USERS").split(",")
+    except (Exception):
+        ALLOWED_USERS = ALLOWED_USERS
     print(f"Allowed users: {ALLOWED_USERS}")
     print(f"System prompt: {SYSTEM_PROMPT}")
+    print(f"Google TTS: {ENABLE_GOOGLE_TTS}")
-    application = ApplicationBuilder().token(os.environ.get("BOT_TOKEN")).build()
-    start_handler = CommandHandler('start', start)
-    application.add_handler(start_handler)
-    clear_handler = CommandHandler('clear', clear)
-    application.add_handler(clear_handler)
-    info_handler = CommandHandler('usage', usage)
-    application.add_handler(info_handler)
-    help_handler = CommandHandler('help', _help)
-    application.add_handler(help_handler)
-    imagine_handler = CommandHandler('imagine', imagine)
-    application.add_handler(imagine_handler)
-    settings_handler = CommandHandler('settings', settings)
-    application.add_handler(settings_handler)
-    application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, chat))
-    application.add_handler(MessageHandler(filters.ATTACHMENT & ~filters.COMMAND, attachment))
-    settings_callback_handler = CallbackQueryHandler(settings_callback)
-    application.add_handler(settings_callback_handler)
-    application.run_polling()
+    # Register message handler and callback query handler for settings
+    dp.register_message_handler(settings, commands=['settings'])
+    dp.register_callback_query_handler(settings_callback, lambda c: c.data.startswith('setting_'))
+    executor.start_polling(dp, skip_updates=True)
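For reference (not part of the committed changes), the spending estimate in the new /usage handler combines three per-unit rates: tokens / 750 x $0.002, images x $0.02, and seconds / 60 x $0.006. A minimal standalone sketch of that arithmetic, using the same constants as the diff; the sample usage numbers are invented:

# Reproduces the cost estimate used in the new /usage handler above.
def estimate_spent(chatgpt_tokens: int, dalle_images: int, whisper_seconds: float) -> float:
    return round(
        ((chatgpt_tokens / 750) * 0.002)        # ChatGPT tokens
        + (float(dalle_images) * 0.02)          # DALL-E images
        + ((whisper_seconds / 60.0) * 0.006),   # Whisper, billed per minute
        4,
    )

# 1500 tokens, 2 images, 90 s of audio -> 0.004 + 0.04 + 0.009 = 0.053
print(estimate_spent(1500, 2, 90))  # 0.053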

requirements.txt
View file

@@ -1,5 +1,6 @@
-openai
-python-telegram-bot
-pydub
-python-dotenv
-asyncio
+aiogram==2.25.1
+gTTS==2.3.1
+openai==0.27.2
+pydub==0.25.1
+python-dotenv==1.0.0
+pyttsx3==2.90