CraftGPT

AppWizard
October 26, 2025
A Minecraft creator named Sammyuri has developed CraftGPT, an in-game language model built from redstone circuits using roughly 439 million blocks. CraftGPT works with a context window of only 64 tokens and translates player input into redstone signals that pass through logic gates and memory circuits.
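Redstone lines carry signal strengths from 0 to 15, so one natural way to move token IDs through such circuitry is to split each ID into base-16 digits. The sketch below is only a minimal illustration of that idea in Python; the encoding, digit count, and function names are assumptions, not details confirmed by the project.

```python
# Hypothetical illustration: packing token IDs into redstone-style
# signal strengths (0-15). The real CraftGPT encoding is not documented here.

def token_to_signals(token_id: int, digits: int = 3) -> list[int]:
    """Split a token ID into base-16 digits, one per redstone signal line."""
    if not 0 <= token_id < 16 ** digits:
        raise ValueError("token_id out of range for the chosen digit count")
    return [(token_id >> (4 * i)) & 0xF for i in range(digits)]

def signals_to_token(signals: list[int]) -> int:
    """Reassemble a token ID from its base-16 signal digits."""
    return sum(s << (4 * i) for i, s in enumerate(signals))

# A 1,920-token vocabulary fits comfortably in three 0-15 signal lines (16^3 = 4096).
assert signals_to_token(token_to_signals(1337)) == 1337
```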
AppWizard
October 3, 2025
Sammyuri has created CraftGPT, a Minecraft project built from 439 million blocks that houses a chatbot with roughly 5 million parameters. CraftGPT uses a limited vocabulary of 1,920 tokens, six network layers, quantized weights, and a 64-token context window. The structure measures 1,020 by 260 by 1,656 blocks and requires the Distant Horizons mod to render properly. Sammyuri has previously built notable projects, including a working version of Minecraft within the game and a 1 Hz CPU named CHUNGUS 2. CraftGPT takes up to two hours to generate a response. Separately, CreativeMode is an AI-powered platform that generates functional mods for Minecraft: Java Edition, currently limited to items, blocks, and mob-related content, with plans to expand its capabilities.
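The quoted specs (about 5 million parameters, six layers, a 1,920-token vocabulary, a 64-token context window) describe a very small transformer. For comparison only, a conventional PyTorch sketch of a model in that size class might look like the following; the hidden size, head count, and class names are assumptions, since the article does not give them, and the real CraftGPT is built in redstone rather than any software framework.

```python
# Minimal sketch of a transformer in CraftGPT's size class, written in
# ordinary PyTorch purely for comparison. Only the vocab size, layer count,
# and context window come from the article; the widths are assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE = 1920   # reported vocabulary
N_LAYERS   = 6      # reported network layers
CONTEXT    = 64     # reported context window
D_MODEL    = 240    # assumed hidden size (not reported)
N_HEADS    = 6      # assumed attention heads (not reported)

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.pos_emb = nn.Embedding(CONTEXT, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=N_HEADS,
            dim_feedforward=4 * D_MODEL, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=N_LAYERS)
        self.lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # ids: (batch, seq) token indices with seq <= CONTEXT
        pos = torch.arange(ids.size(1), device=ids.device)
        x = self.tok_emb(ids) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1)).to(ids.device)
        x = self.blocks(x, mask=mask)          # causal self-attention
        return self.lm_head(x)                 # next-token logits

model = TinyLM()
# Prints roughly 5.1 million with these assumed widths.
print(sum(p.numel() for p in model.parameters()))
```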
AppWizard
October 2, 2025
Minecraft YouTuber Sammyuri has created CraftGPT, a compact language model with 5,087,280 parameters and a 1,920-token vocabulary, whose weights were trained in Python on the TinyChat dataset. The build occupies 1,020 x 260 x 1,656 blocks and requires the Distant Horizons mod to render properly. CraftGPT often produces off-topic, grammatically incorrect, or nonsensical responses, owing in part to its limited 64-token context window, and generating a reply can take up to two hours even when the Minecraft High-Performance Redstone Server (MCHPRS) is used to speed up processing.
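Since the weights were reportedly trained in Python on TinyChat, the data pipeline would have had to fit the same constraints: a 1,920-entry vocabulary and 64-token windows. The sketch below shows one plausible, self-contained way to prepare conversational text under those constraints; the word-level tokenizer, helper names, and toy corpus are stand-ins, not the project's actual pipeline.

```python
# Hypothetical data-preparation sketch for a 1,920-token vocabulary and
# 64-token context window. The tokenizer is a simple word-level stand-in.
from collections import Counter

def build_vocab(lines: list[str], vocab_size: int = 1920) -> dict[str, int]:
    """Keep the most frequent words, reserving id 0 for unknown tokens."""
    counts = Counter(word for line in lines for word in line.lower().split())
    vocab = {"<unk>": 0}
    for word, _ in counts.most_common(vocab_size - 1):
        vocab[word] = len(vocab)
    return vocab

def encode(line: str, vocab: dict[str, int]) -> list[int]:
    """Map words to token ids, falling back to <unk> for out-of-vocabulary words."""
    return [vocab.get(w, 0) for w in line.lower().split()]

def windows(ids: list[int], context: int = 64) -> list[list[int]]:
    """Chop a token stream into fixed-size training windows."""
    return [ids[i:i + context] for i in range(0, len(ids), context)]

corpus = ["hello how are you", "i am fine thank you"]   # stand-in for TinyChat
vocab = build_vocab(corpus)
stream = [t for line in corpus for t in encode(line, vocab)]
print(windows(stream)[:1])
```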
AppWizard
October 2, 2025
A YouTuber named sammyuri has built a ChatGPT-style chatbot called CraftGPT inside Minecraft, featuring approximately 5 million parameters and trained on the TinyChat dataset for basic English conversation. CraftGPT has a vocabulary limited to 1,920 tokens and a small context window of 64 tokens, which leads to responses that can be off-topic, ungrammatical, or nonsensical. The model is also slow, taking hours to generate a response even at an accelerated tick rate, and could take up to a decade per response without a specialized multithreaded server such as MCHPRS.
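Taking the quoted figures at face value, the implied speedup from running on a specialized server is simple to estimate; the sketch below only does that arithmetic and makes no claim about actual tick rates.

```python
# Back-of-envelope arithmetic using only the figures quoted above:
# up to ~2 hours per response on an accelerated server versus up to
# ~10 years at the ordinary tick rate. This is just the implied ratio,
# not a measured benchmark.
HOURS_ACCELERATED = 2
HOURS_ORDINARY = 10 * 365 * 24          # "up to a decade", ignoring leap days

speedup = HOURS_ORDINARY / HOURS_ACCELERATED
print(f"Implied speedup: ~{speedup:,.0f}x")   # roughly 44,000x
```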