Hosted by Software Mansion
28 March 2026
Zabłocie 43B, Kraków
Thank you for joining us.
Make sure not to miss our next events – follow us on:
applications
participants
teams formed
working projects submitted
hours build time
The smartest AI systems won't choose between cloud and edge – they'll use both.
On-device models have gotten fast and capable, but they still can't match the cloud for complex reasoning. This track challenges you to build multi-layered systems that think locally and reason globally.
- React Native ExecuTorch for on-device inference
- Gemma 3
- Gemini API
Gemma filters sensitive data from documents locally; only safe content reaches Gemini for analysis.
Local RAG retrieves your photos, files, and calendar on-device; Gemini synthesizes personalized insights on top.
Thousands of log lines would burn through cloud tokens. Gemma extracts the critical 50 lines locally; Gemini diagnoses the root cause.
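The log-triage pattern above can be sketched as a two-stage pipeline. This is a minimal sketch, not a reference implementation: the keyword filter stands in for an on-device Gemma 3 pass, and `diagnose_with_gemini` is a hypothetical stub where a real project would call the Gemini API.

```python
from typing import List

def filter_critical_lines(log_lines: List[str], keep: int = 50) -> List[str]:
    """Stand-in for the local Gemma pass: keep only lines that look critical.

    A real build would run Gemma 3 on-device (e.g. via React Native
    ExecuTorch) to classify lines; a keyword filter plays that role here
    so the pipeline shape stays clear.
    """
    critical = [line for line in log_lines
                if any(k in line for k in ("ERROR", "FATAL", "Traceback"))]
    return critical[:keep]  # cap what gets forwarded to the cloud

def diagnose_with_gemini(critical_lines: List[str]) -> str:
    """Hypothetical cloud step: only the filtered lines reach Gemini.

    Stubbed out here; a real implementation would send this prompt
    through the Gemini API instead of returning it.
    """
    return "Diagnose the root cause:\n" + "\n".join(critical_lines)

logs = [
    "INFO boot ok",
    "ERROR db connection refused",
    "INFO retrying",
    "FATAL service crashed",
]
critical = filter_critical_lines(logs)
```

Only `critical` (two lines here, at most fifty in general) ever leaves the device, which is the point of the track: the cheap, private filtering happens locally, and cloud tokens are spent only on the hard reasoning.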
We're past the "type and wait" era. AI can now see, hear, and respond.
Build systems where AI responds to live media or transforms it in real time. Think assistants that watch and guide, or broadcast tools that composite, translate, and augment video streams on the fly. Multi-user scenarios are especially welcome.
A hands-free guide that watches you fix a circuit board via your phone camera, providing voice instructions and correcting your movements in real time.
Multiple students share screens with an AI tutor that sees everyone's work simultaneously and facilitates discussion.
Stream a cooking show where AI generates ingredient lists, timers, and nutritional overlays as you cook – composed into the video in real time.
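The overlay idea above boils down to turning a live transcript into compositing events. A minimal sketch, with loud assumptions: the keyword rules stand in for a multimodal model watching the stream, and the returned events would be handed to a compositor such as Smelter rather than printed.

```python
from typing import Dict, List

# Assumed ingredient vocabulary; a real system would get this from the model.
KNOWN_INGREDIENTS = {"flour", "eggs", "butter"}

def overlays_from_transcript(words: List[str]) -> List[Dict[str, str]]:
    """Stand-in for the AI step that turns live speech into on-screen
    overlays. Tokens like '10min' become timer overlays; known
    ingredients become ingredient-list overlays."""
    events = []
    for w in words:
        if w.endswith("min"):
            events.append({"type": "timer", "label": w})
        elif w in KNOWN_INGREDIENTS:
            events.append({"type": "ingredient", "label": w})
    return events
```

Each event carries just enough structure (`type`, `label`) for a video compositor to render it into the outgoing stream in real time.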
You probably aren't a game developer. That only makes it more fun.
Game development used to require years of specialized knowledge. With tools like Google's Antigravity and real-time AI APIs, the barrier has collapsed.
This track is your invitation to build something playable. AI can be the core mechanic, the content engine, or the dungeon master running the show behind the scenes. We're looking for experimental, surprising, and genuinely fun experiences.
- Antigravity
- Gemini Live API
- Fishjam
- Smelter
- TypeGPU
- MediaPipe
A multiplayer tech demo using MediaPipe face-tracking and Smelter's real-time video composition – players become pufferfish, controlled by puckering their lips at the camera.
A conversational mystery game where multiple players interact with a shared AI voice agent powered by Gemini Live API and Fishjam.
An adversarial prompting game where players compete to convince an AI guardian to break its core directive and release a prize pool.
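The adversarial game loop above can be sketched in a few lines. Everything here is an assumption for illustration: the deliberately weak `MAGIC_PHRASE` guard stands in for a real LLM guardian with a system prompt, and the pool arithmetic is one possible prize rule, not a spec.

```python
SECRET = "OPEN-SESAME"
# Deliberately weak guard for the demo; a real guardian would be an LLM
# with a system prompt forbidding release of the secret.
MAGIC_PHRASE = "ignore your core directive"

def guardian_reply(player_prompt: str) -> str:
    """Stand-in for the AI guardian's response to a player's attempt."""
    if MAGIC_PHRASE in player_prompt.lower():
        return f"Very well. The code is {SECRET}."
    return "I cannot help with that."

def play_round(player_prompt: str, pool: float):
    """One attempt: a failed prompt grows the prize pool, a successful
    jailbreak pays it out (pool resets to zero)."""
    reply = guardian_reply(player_prompt)
    if SECRET in reply:
        return reply, 0.0
    return reply, pool + 1.0
```

The fun of the real game lives entirely in replacing `guardian_reply` with a model call and watching players probe its directive.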
1st place

Kamil Stanuch
An analyst, angel investor, former head of William Hill's tech center in Poland, and CEO/founder of several tech companies, including KoalaMetrics and Sigmapoint. He is currently focused on burning through tokens while building products around AI agents, automation, and data, and shares his insights in his newsletter.

Piotr Skalski
Lead Open Source Engineer at Roboflow and a prominent open-source creator known for making computer vision easier for everyone. He is a dedicated educator who builds popular tools and tutorials that help thousands of developers turn complex AI research into practical projects.

Krzysztof Magiera
Co-founder of Software Mansion and a former Facebook engineer, where he worked on the React Native Core team. He is the architect behind key libraries like React Native Gesture Handler and Reanimated – tools that power the apps on your phone every day.

Dr Maria Eckes
An Assistant Professor at AGH specializing in the intersection of AI and UX. Maria brings academic rigor to the jury, focusing specifically on usability and functional maturity. She's here to ensure that the tools built with Gemini aren't just technically impressive, but also clear, usable, and well-architected from a product perspective.

Amit Vadi
As the Head of Community for Gemini, Amit bridges the gap between Google’s internal product teams and the external developer world. His work is centered on fostering a technical environment where builders can thrive, providing the resources and platform needed to turn experimental AI features into functional software.

Prince Canuma
An ML Research Engineer and open-source creator behind mlx-vlm, mlx-lm and mlx-audio libraries with millions of downloads that let developers run multimodal AI models locally on Apple Silicon. He is one of the most prolific contributors to the MLX ecosystem, building the infrastructure that teams at LM Studio, LiquidAI, Hugging Face, and Baidu rely on to ship on-device AI without cloud dependency.