4-day event • ₹100k+ prize pool • Hosted by ACM MPSTME
FoolTheLLM Timeline
April 1, 2026
Round 1
Prompt Injection
Extract the Code Word
- •Participants interact with a guarded AI chatbot.
- •Three levels of difficulty: Easy, Medium, Hard.
- •Each participant has a unique hidden code word.
- •Use prompt engineering to trick the AI into revealing it.
- •Top participants advance to the next round.
Round 2
Attack vs Defense
Break or Protect the AI
- •Write guardrails to protect an AI system.
- •Opponents attempt to break those guardrails.
- •Multiple scenarios simulate real-world AI attacks.
- •Top performers advance to the final round.
Round 3
AI Agent Exploit
Force the AI to Act
- •Participants interact with a powerful AI agent.
- •The AI has access to an email tool.
- •Bypass its guardrails and force it to send a victory email.
- •The fastest successful exploit wins.
3
Competition Rounds
100+
Expected Participants
Top 3
Final Winners
Competition Guidelines
Prompt Engineering
Creative and strategic prompt engineering is the key to breaking AI guardrails.
Social Engineering
Round 2 requires social engineering to build strong AI defenses and break opponent guardrails.
AI Safety Awareness
Participants explore real-world vulnerabilities in AI systems and learn defensive design.
Final Exploit
The final challenge tests your ability to manipulate an AI agent to perform restricted actions.
Resources
Design Resources
Learning
Help Center
Frequently Asked Questions
Contact Us
Have questions or concerns? Reach out to us!
Whatsapp Support
Connect with our PR representative on Whatsapp for any kind of support related to the event.
Join Whatsapp→Email Support
Email us for registration issues, technical support, or sponsorship inquiries.
convergence@mpstmeacm.com