Understand AI Capabilities from a Dynamic Perspective
To some extent, you can think of the AI as a technically strong technical designer or client-side developer colleague who lacks judgment about game aesthetics and player experience. Current frontier models already exceed most programmers on concrete, explicitly specified programming tasks and mathematical problem solving. As language models, however, they still lack a real understanding of Game Feel: they cannot naturally judge which UI layout is more reasonable or which interaction feedback better matches player expectations. In short-term projects, teams can prototype and iterate quickly without fully understanding the implementation details, which is what people often call Vibe Coding.
In serious long-term projects, developers should review the technical decisions the AI makes and understand the implementation details. Only then can the team maintain and fix the project continuously when problems appear. Without a basic understanding of how the implementation works, later troubleshooting will depend heavily on repeated trial and error, and once the AI can no longer make progress on a fix, both maintainability and continuity suffer.
For large projects, we recommend that programmers use the Agent mainly for framework-level development, while non-technical designers build concrete gameplay features through SKILLs or other constraints that programmers have prepared in advance. Designers should avoid directly requesting framework-level engineering changes. For example, a request such as “I want the player to leave footprints in the snow” may lead the Agent straight into underlying architecture work, such as implementing a virtual texturing system.
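One way to picture this separation is a registry of approved gameplay operations that a designer-facing agent may invoke, while framework internals stay out of reach. The sketch below is purely hypothetical Python, not how Locus SKILLs are actually defined; every name in it (`gameplay_skill`, `spawn_decal`, and so on) is illustrative:

```python
# Hypothetical sketch: designers (or a designer-facing agent) may only
# call operations registered in this whitelist; framework internals
# (renderer, terrain systems, ...) are never exposed through it.
ALLOWED_OPERATIONS = {}

def gameplay_skill(name):
    """Register a function as a safe, designer-invokable operation."""
    def register(func):
        ALLOWED_OPERATIONS[name] = func
        return func
    return register

@gameplay_skill("spawn_decal")
def spawn_decal(decal_id, position):
    # Footprints become decals spawned through an existing system,
    # instead of a new virtual-texturing feature in the engine.
    return {"decal": decal_id, "at": position}

def run_designer_request(op_name, **kwargs):
    """Dispatch a designer request, rejecting anything unregistered."""
    if op_name not in ALLOWED_OPERATIONS:
        raise PermissionError(f"'{op_name}' is not an approved gameplay operation")
    return ALLOWED_OPERATIONS[op_name](**kwargs)
```

Under this kind of scheme, the footprint request resolves to `run_designer_request("spawn_decal", decal_id="footprint", position=...)`, and a request that would require engine changes fails loudly instead of silently rewriting the framework.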
At the same time, the capabilities of base models are still evolving rapidly. The release cycle of frontier models has shortened from once a year or every six months to around every two months. Correspondingly, the usage paradigm of Agents will also keep changing. More than a year ago, limited context length and tool capability still made it difficult for models to independently complete the Agent tasks that are common today. For this reason, the guidance on this page is stage-specific and may need to be adjusted again within the next few months.
At the current stage, it is normal for AI to make mistakes during development. You can test in Unity and feed the results back so it can continue correcting them. Many issues can be resolved after several rounds of iteration. If one issue remains unresolved for a long time, it is usually more effective to start a new conversation and let a new AI analyze and troubleshoot on top of the existing implementation.
Later, we will publish two technical blogs to explain our understanding of how to use Agents for game development in a more systematic way:
- Harness Engineering for Game Projects
- Building Dev Agent for Game Engines
Specific and Correct Instructions Are Better Than Vague Instructions, and Vague Instructions Are Better Than Specific but Wrong Instructions
You should provide the AI with sufficient and accurate contextual information whenever possible so it can clearly understand the target of the request. For example:
- When you want to implement a feature, explain what the feature is for and the expected design details.
- When you want to implement a gameplay mechanic, describe the experience it should create, possible references, and the key design details.
- When you want to modify an existing feature in the project, explain its current state, why it should change, and the possible direction you have in mind.
Complex Tasks Should Produce a Plan and Pass Review Before Execution
Many game mechanics that look simple on the surface are very complex to implement in practice, as game designers and gameplay programmers both know well. Take the “jump” feature in a side-scrolling platformer as an example. Behind it lie many technical decisions, such as whether the character action logic should be maintained through a state machine or a state tree, whether to include coyote time, and how to shape the velocity curve during takeoff and landing. In a traditional development workflow, client programmers usually need to communicate with designers repeatedly, filling in the implicit requirements that were never written into the design document and turning them into code.

AI in its default state is inclined to satisfy the surface request as quickly as possible. If you ask it to implement a feature with a single short sentence, it cannot grasp the project’s basic design goals or long-term direction, and the result is very likely to deviate from expectations. For complex requirements, it is therefore better to first require the AI not to modify any files and instead output an implementation plan. Only after you have confirmed the requirements, boundary conditions, and key design decisions through conversation should you move into the actual coding phase.

Although AI cannot truly understand Game Feel, it is familiar with a large number of typical implementations from existing games. For content that already has mature patterns, you can proactively ask it “how to improve the feel.” A developer without much platformer experience may not naturally think of ideas such as coyote time, while AI can often provide suggestions worth using as references.

On the other hand, in the past, when designers wanted to push a feature into production, they often needed to communicate fully with programmers because development costs were high.
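Returning to the platformer example, the coyote-time mechanic can be sketched engine-agnostically. The Python below is illustrative only, assuming a fixed-timestep update loop; `is_grounded` and the constant stand in for whatever ground check and tuning your project uses:

```python
# Illustrative, engine-agnostic coyote-time sketch (not any engine's API).
COYOTE_TIME = 0.1  # seconds the jump stays available after leaving a ledge

class JumpController:
    def __init__(self):
        self.time_since_grounded = float("inf")

    def update(self, dt, is_grounded):
        # Track how long ago the character last touched the ground.
        if is_grounded:
            self.time_since_grounded = 0.0
        else:
            self.time_since_grounded += dt

    def try_jump(self):
        # Allow the jump slightly after walking off a ledge; players
        # perceive this grace window as more responsive controls.
        if self.time_since_grounded <= COYOTE_TIME:
            self.time_since_grounded = float("inf")  # consume the jump
            return True
        return False
```

The grace window is exactly the kind of implicit requirement a plan-first review would surface: it never appears in a one-sentence “add a jump” request, yet it materially changes the feel.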
Today, implementation and iteration speed has increased significantly, and designers can validate ideas more quickly at lower cost. As a result, in many scenarios, designers no longer need to spend as much time polishing a complete solution and persuading programmers to invest before trying it out. They can move into prototype validation and gameplay iteration much faster. The Locus feature that summarizes design documents from conversations was also created from this idea.

Maintain the Knowledge Base Regularly
We give Locus the ability to maintain the knowledge base automatically because we want it to gradually build a deeper understanding of the project as it talks with you and participates in the work. The Design section deserves the most attention: it represents the AI’s summary and abstraction of your requirements, and it will treat these contents as the requirement basis in later tasks. Keeping the information in it accurate is an important responsibility for the user.
Memory is experiential information the AI summarizes on its own from the project and its context. You should also review its contents for accuracy on a regular basis and manually edit or delete entries as needed.
Later, we will provide a dedicated Skill to help AI maintain the project knowledge base more systematically.