AI Dev Conf (Amsterdam 29/8/25)

 
  • Linux foundation > AI myths

    • small business using AI > big
    • seems there’s less AI talent & easier to teach your employees (cuff xD)
    • who is sovereign AI now
      • seems we need open source AI (duh…:-))
      • vendor lock in (there is some state of sovereign AI report @ linuxfoundation.org/research)
  • BAML: Vaibhav Gupta @ Boundary

    • language for building AI agents
    • db friendly
    • structured data
    • multimodal & trustworthy !!
    • “basically a made up language”
    • burden of failure responsibility is shifted
      • AI models down
    • dude just keeps repeating his name (??)
    • tsx for websites
    • jupyter notebooks > normal python code for deployment
    • prompts are functions
      • they have input/output params
      • some graph to show code
      • exponential backoff (retries)
    • plugs into any language
    • oaaa-ml
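
The "prompts are functions" idea (typed input/output params, retries with exponential backoff) can be sketched in plain Python. BAML itself is its own DSL, so every name below is made up for illustration:

```python
import time

def with_exponential_backoff(fn, retries=4, base_delay=0.01, sleep=time.sleep):
    """Call fn(); on failure wait base_delay * 2**attempt, then retry."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the failure
            sleep(base_delay * (2 ** attempt))

def extract_invoice(text: str) -> dict:
    """A 'prompt as a function': typed input, structured output.
    The model call is faked here; BAML would validate the response
    against a declared schema."""
    # imagine the LLM call here; a failure would raise and be retried
    return {"total": 42.0, "currency": "EUR"}
```

The point of the backoff wrapper is the "burden of failure responsibility is shifted" bullet: when the AI model is down, the caller, not the prompt author, decides the retry policy.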
  • Verena Dittmer @ Canva

    • they have ~200 ML engineers
    • aws & snowflake
    • gradio + web
    • ray
    • Kubeflow & Argo
    • vLLM for self-hosting
    • what do researchers need? (click)
    • research funnel model
    • rapid experimentation is key
      • free choice of GPUs (oof)
      • flexible environment
    • import anything & gradio
    • prototype in a day
    • speed of iteration
    • cli tool = arnold (training cli)
      • submit on cluster
        • arnold submit
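
Arnold is internal to Canva, so its real interface is unknown; as a guess at what a "training CLI with submit" might look like, a hypothetical argparse sketch (every flag invented):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical reconstruction of an 'arnold'-style training CLI.
    All subcommands and flags are made up for illustration."""
    parser = argparse.ArgumentParser(prog="arnold")
    sub = parser.add_subparsers(dest="command", required=True)
    submit = sub.add_parser("submit", help="submit a training job to the cluster")
    submit.add_argument("script", help="training script to run")
    submit.add_argument("--gpus", type=int, default=1, help="free choice of GPUs")
    submit.add_argument("--env", default="default", help="flexible environment name")
    return parser
```

The design goal from the talk (rapid experimentation, prototype in a day) is served by the defaults: `arnold submit train.py` should be enough to get on the cluster.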
  • Oumi

    • chatgpt belongs to the past
    • worked on gemini
    • companies using more open source models
    • don’t want to send data to openAI
    • steep onboarding curve
    • does everything - democratizing open source AI
    • end to end
      • enterprise version duh
    • companies want to do 100 things but not enough people
    • noble cause to advance AI
  • Jennifer Prendki (ex-DeepMind)

    • quantum computing
    • always up & down in startups
    • entrepreneurs are ambitious
    • Nvidia powers everything
    • innovation can come from everywhere
    • first useful quantum computer in 30 years (according to Jennifer)
      • hired more QC experts
      • proposed by Richard Feynman
      • rn not tangible
    • QC is fast at inference
    • good at simulations
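
The "good at simulations" point is easiest to see from the other side: simulating qubits classically needs a state vector that doubles in size per qubit. A one-qubit toy simulator in pure Python (nothing vendor-specific, just the linear algebra):

```python
import math

# A single qubit is a 2-amplitude state vector; |0> is [1, 0].
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard gate

def apply(gate, state):
    """Apply a single-qubit gate: plain matrix-vector multiplication."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    return [abs(a) ** 2 for a in state]
```

For n qubits the state vector has 2**n amplitudes, which is why classical simulation of quantum systems hits a wall and why Feynman proposed quantum computers for it in the first place.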
  • ML models rot

    • can’t rollback
    • no versioning
    • no data
    • no shared defs
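
The rollback/versioning bullets above can be made concrete with a toy registry (all names invented, not any real tool):

```python
class ModelRegistry:
    """Toy registry showing what's missing when models rot:
    versioning, rollback, and a shared place for definitions."""

    def __init__(self):
        self._versions = {}   # name -> list of (version, artifact)
        self._active = {}     # name -> index into that list

    def publish(self, name, artifact):
        """Store a new version and make it the active one."""
        versions = self._versions.setdefault(name, [])
        versions.append((len(versions) + 1, artifact))
        self._active[name] = len(versions) - 1
        return versions[-1][0]

    def active(self, name):
        """Return (version, artifact) currently being served."""
        return self._versions[name][self._active[name]]

    def rollback(self, name):
        """Point back at the previous version; old artifacts are kept."""
        if self._active[name] == 0:
            raise ValueError("nothing to roll back to")
        self._active[name] -= 1
        return self.active(name)[0]
```

Without the "no data / no versioning" gaps, rollback is just moving a pointer, because every published artifact is still around.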
  • loads of open source stuff for quantum AI

  • connecting tissue

  • very early days & tooling so no “innovation”

  • map of how to get there exists already

  • hybrid btw QC & everything else

  • QC sucks at data redundancy (no-cloning: quantum states can't be copied)

  • Zhou Yu @ Arklex (also @ Columbia)

  • Agent orchestration layer

  • tool use

  • deploy & iterate faster

  • interactive system

  • history across sessions

  • no best practices for testing & evaluation

  • static benchmarks don’t work

  • human testing or simulated users always surprise you

  • currently no proper testing

  • synthetic users (Io1) - define persona

  • sandbox testing

  • domain knowledge
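
The synthetic-user idea (define a persona, run the agent against it in a sandbox) might look like the minimal sketch below; the persona fields and the random user are stand-ins for an LLM-driven simulator:

```python
import random

def simulated_user(persona, seed=0):
    """Build a user turn generator from a persona dict.
    Fields ('goals', 'patience') are invented for this sketch."""
    rng = random.Random(seed)
    def next_turn(history):
        if len(history) >= persona["patience"]:
            return None  # user gives up; conversation ends
        return rng.choice(persona["goals"])
    return next_turn

def run_sandbox(agent, user, max_turns=10):
    """Drive the agent against the simulated user, recording a transcript."""
    history = []
    for _ in range(max_turns):
        utterance = user(history)
        if utterance is None:
            break
        history.append(("user", utterance))
        history.append(("agent", agent(utterance)))
    return history
```

Static benchmarks can't produce the surprises the talk mentioned; a persona-driven loop like this at least varies the conversation shape per seed.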

  • Sayak Paul @ Hugging Face (hype2.1)

  • State of video generation

  • diffusers

  • Sora: closed API → open models under Apache 2 license

  • open source: LTX-Video, Wan 2.2, HunyuanVideo

  • diff shapes and sizes

  • listen a whole sine (hears a barrel accent)

  • lots of memory to run (w/o optimization)

  • diffusion models → random noise → image
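
A caricature of that noise-to-image loop in pure Python; a real diffusion model predicts the noise with a trained network at each step, while this toy "denoiser" is simply handed the target:

```python
import random

def toy_diffusion_sample(target, steps=50, seed=0):
    """Caricature of diffusion sampling: start from pure noise and move
    toward the data one small step at a time. Only the shape of the loop
    matches a real sampler; the 'denoiser' here cheats by knowing target."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in target]   # start: random noise
    for t in range(steps, 0, -1):
        # denoising step: move a 1/t fraction of the remaining distance
        x = [xi + (ti - xi) / t for xi, ti in zip(x, target)]
    return x
```

Text conditioning (the next bullet) enters a real model as an extra input to the denoiser at every step, steering which "target" the noise collapses toward.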

  • condition models with text

  • DMs are not single models (pipeline: text encoder + denoiser + decoder)

  • text → video/image

  • frame interpolation
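
Frame interpolation in its most naive form is just blending neighboring frames; real methods estimate motion, but the baseline is one blend per pixel:

```python
def interpolate_frames(frame_a, frame_b, n_mid):
    """Naive linear frame interpolation: create n_mid in-between frames
    by blending two frames (flat lists of pixel values). Real systems
    use optical flow or a learned model; this is only the baseline."""
    frames = []
    for k in range(1, n_mid + 1):
        alpha = k / (n_mid + 1)           # 0 < alpha < 1, evenly spaced
        frames.append([(1 - alpha) * a + alpha * b
                       for a, b in zip(frame_a, frame_b)])
    return frames
```

This is how a low-fps generation gets smoothed to a higher frame rate after the fact.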

  • pre to video

  • stitch images together

  • Tav diffuses

  • quantization of loading 112 GB
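
Why quantization helps with loading: storing int8 values plus one float scale instead of full-precision floats cuts memory roughly 2-4x depending on the source precision (so a ~112 GB checkpoint shrinks substantially). A minimal symmetric-quantization sketch:

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats into [-127, 127] ints
    plus a single per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid 0 for all-zero
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error is at most half a quantization step."""
    return [qi * scale for qi in q]
```

Real loaders (e.g. bitsandbytes-style 8-bit/4-bit) quantize per block rather than per tensor, but the storage trade-off is the same idea.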

  • US 47 somers

  • declarative

  • inpainting + video

  • FramePack F1

  • Video structured guidance

  • camp filter + trajectory

  • make video effects finetuned

  • Running AI in browsers

  • everyone just wants a connexion

  • it is vibes

  • Llama 3.2, Qwen2.5, Gemma 2

  • don’t always need huge models

  • ONNX + WebGPU + WebAssembly

  • what’s new over the years?

  • WebLLM

  • runtime optimization

  • hosted on CDN: Hugging Face

  • low-bit quantized models

  • cached weights

  • MLC: WebGPU + WebAssembly

  • ML on edge (microcontrollers)

  • Embeam

  • Hmyml

  • Micropython

  • modules → mpy

  • 5x-20x vs Python

  • store to a low level format

  • activity status, noise monitor, juge class

  • embedded linux
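
The kind of workload that fits MicroPython on a microcontroller (like the noise monitor demo mentioned above) is tiny by design; a rolling-average sketch in plain Python, with the sensor reading faked:

```python
def noise_monitor(threshold, window=4):
    """Rolling-average noise monitor: report whether the average of the
    last `window` sound-level samples exceeds threshold. On a real board
    the samples would come from an ADC pin; here they are fed in directly."""
    samples = []
    def feed(level):
        samples.append(level)
        if len(samples) > window:
            samples.pop(0)                 # keep only the recent window
        avg = sum(samples) / len(samples)
        return avg > threshold
    return feed
```

Compiled to `.mpy`, a module like this ships as low-level bytecode, which is the "store to a low level format" bullet above.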

  • Michael Jonsson @ IBM Research

  • optical bench → ML

  • f-fo-volume

  • large scale benchmarks

  • streaming benchmarks

  • fms-hf-tuning

  • accelerate launch config

  • huge YAML config for kube

  • someone made the file but nobody changed the system

  • when everything depends on one person, documentation sucks when they leave

  • hardware → runtime layer

  • even if documented → problems to re-run

  • pipeline for experiments required

  • pybamhc + SQL + Ray Tune + kube-like CLI

  • github.com/IBM/sdo

  • grid sampler + fine-tuning actuator

  • Ray works on bare-metal clusters

  • no need to rebuild containers

  • isolated virtual env for clusters

  • vLLM: roots sometimes

  • reduction in avg time by 10 x (might not be in release yet)

  • reduce no of benchmarks by using predictive models (using old data + system noise)
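
The grid sampler + predictive-model idea can be sketched as below (this is not IBM's code; the predictor stands in for a model trained on old benchmark data plus system noise):

```python
import itertools

def grid_sample(space):
    """Expand a dict of lists into every hyperparameter combination,
    i.e. what a grid sampler hands to the fine-tuning jobs."""
    keys = sorted(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def prune_with_predictor(configs, predict, budget):
    """Rank configs by a cheap predicted score and keep only the top
    `budget`, so far fewer real benchmark runs are needed."""
    ranked = sorted(configs, key=predict, reverse=True)
    return ranked[:budget]
```

Running only the predicted-best slice of the grid is what makes "reduce no of benchmarks" possible without giving up coverage of the search space entirely.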

  • Oleg Šelajev @ Docker

  • build agents app!

  • less YOLO, more experimentation

  • AI: carefully crafted grains of sand that can think

  • MCP

  • want: standalone app with access to third party system

  • docker model runner

  • models as OCI artifacts

  • mcp catalog & mcp toolkit

  • sandbox others’ code

  • no API key on server

  • goose AI + mcp gateway

  • put all this in a single YAML file

  • docker compose up is the only thing required

  • tryd

  • docker & cloud run
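
Docker Model Runner serves local models over an OpenAI-compatible chat API, so client code only needs to build and parse that format; the model name below is a placeholder, and the endpoint URL/port depend on your setup, so no network call is made here:

```python
def chat_request(model, user_message):
    """Build an OpenAI-style chat-completion payload. A locally served
    model (e.g. via Docker Model Runner) accepts this same shape; the
    model identifier here is just an example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def extract_reply(response):
    """Pull the assistant text out of an OpenAI-style chat response."""
    return response["choices"][0]["message"]["content"]
```

Because the wire format matches, agents built against a hosted API can be pointed at the local runner without code changes, which is the "no API key on server" benefit above.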
