AutoML+

Modules
Route definitions for the AutoML+ service.
analyze_web_accessibility_and_readability(file, url=None, extra_file_input=None) (async)

Run WCAG-inspired accessibility checks and optional readability analysis on HTML.

Source code in app/automlplus/router.py, lines 166-278.
check_alt_text(image_url=Form(...), alt_text=Form(...)) (async)

Evaluate provided alt text against the referenced image using an LLM.

Source code in app/automlplus/router.py, lines 46-69.
image_to_website(image_file=File(default=None)) (async)

Convert an uploaded image into a basic HTML website structure.

Source code in app/automlplus/router.py, lines 31-43.
run_on_image(prompt=Form(...), model=Form(default=None), image_file=File(default=None), image_url=Form(default=None)) (async)

Run a vision-language model on an image and return the text output.

Source code in app/automlplus/router.py, lines 72-107.
run_on_image_stream(prompt='', model=None, image_file=None, image_url=None) (async)

Stream a vision-language model's output on an image and prompt.

Source code in app/automlplus/router.py, lines 110-163.
ImageConverter

Convert images to base64 from local paths or URLs.

Source code in app/automlplus/utils.py, lines 14-53.
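The local-path branch of such a converter is little more than a file read plus base64 encoding. A minimal sketch (method names are assumptions; the real class also handles URLs, which would need a network fetch omitted here):

```python
import base64
import os
import tempfile
from pathlib import Path


class ImageConverter:
    """Sketch: convert a local image file to a base64 string."""

    @staticmethod
    def from_path(path) -> str:
        # Read the file's raw bytes and return their base64 encoding.
        return base64.b64encode(Path(path).read_bytes()).decode("ascii")


# Demonstrate with a throwaway file containing a PNG magic prefix.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"\x89PNG")
    tmp_name = f.name

encoded = ImageConverter.from_path(tmp_name)
os.unlink(tmp_name)
```

A URL branch would first download the bytes (e.g. with httpx) and then feed them through the same encoding step.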
bytes_to_base64(image_bytes) (staticmethod)

Convert raw image bytes to base64 PNG string.

Source code in app/automlplus/utils.py, lines 43-53.
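The core of this helper is a one-liner over the standard library. A minimal sketch (the real method may additionally normalise the bytes to PNG, e.g. via Pillow, before encoding; this sketch performs only the base64 step):

```python
import base64


def bytes_to_base64(image_bytes: bytes) -> str:
    """Encode raw image bytes as an ASCII base64 string."""
    # b64encode returns bytes; decode to get a plain str for JSON payloads.
    return base64.b64encode(image_bytes).decode("ascii")
```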
extract_text_from_html_bytes(content)

Extract readable text from raw HTML bytes.

Source code in app/automlplus/utils.py, lines 56-64.
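A stdlib-only approximation of what such a function does (the real implementation likely uses an HTML library such as BeautifulSoup; this sketch uses html.parser and skips script/style content):

```python
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collect visible text nodes, ignoring script/style bodies."""

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def extract_text_from_html_bytes(content: bytes) -> str:
    """Decode HTML bytes and return the visible text, space-joined."""
    parser = _TextExtractor()
    parser.feed(content.decode("utf-8", errors="replace"))
    return " ".join(parser.parts)
```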
json_safe(data)

Recursively convert string values to JSON-safe strings.

Source code in app/automlplus/utils.py, lines 67-81.
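A common shape for this kind of sanitiser is to walk containers recursively and stringify anything that json cannot serialise natively. A sketch under that assumption (the actual coercion rules may differ):

```python
def json_safe(data):
    """Recursively coerce values into JSON-serialisable equivalents (sketch)."""
    if isinstance(data, dict):
        return {str(k): json_safe(v) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return [json_safe(v) for v in data]
    if isinstance(data, (str, int, float, bool)) or data is None:
        return data  # already JSON-native
    return str(data)  # fall back to the string representation
```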
Tools
Static analysis tools for AutoML+.
Static tools derive insights from content using deterministic, rule-based libraries — no LLM calls are made. They are fast, reproducible, and require no API credentials.
Current tools:
- ReadabilityAnalyzer: computes textstat readability metrics (Flesch Reading Ease, word counts, sentence length, etc.) over a plain-text string.
- split_chunks: splits an HTML/text string into fixed-size character chunks while tracking the original 1-based line ranges for each chunk.
ReadabilityAnalyzer

Compute readability metrics for a piece of text.

Source code in app/automlplus/tools/static.py, lines 24-52.
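The real class reportedly delegates to the textstat library; to illustrate the shape of the output without that dependency, here is a self-contained sketch computing a few of the same surface metrics (method and key names are assumptions):

```python
import re


class ReadabilityAnalyzer:
    """Sketch: basic readability metrics; the real class uses textstat."""

    def analyze(self, text: str) -> dict:
        # Naive sentence split on terminal punctuation.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        # Words are runs of letters/apostrophes.
        words = re.findall(r"[A-Za-z']+", text)
        return {
            "word_count": len(words),
            "sentence_count": len(sentences),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
        }
```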
split_chunks(content, chunk_size)

Split content into fixed-size character chunks and return 1-based (start_line, end_line) ranges for each chunk. Line ranges are accurate even when chunks start/end mid-line.

Source code in app/automlplus/tools/static.py, lines 55-90.
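One way to implement this is to count newlines up to each chunk boundary: the line of a character at index i is the number of newlines before it plus one, which stays correct even when a chunk starts or ends mid-line. A sketch under that approach (the return shape is an assumption):

```python
def split_chunks(content: str, chunk_size: int) -> list[tuple[str, int, int]]:
    """Split content into fixed-size chunks with 1-based line ranges (sketch)."""
    chunks = []
    for start in range(0, len(content), chunk_size):
        chunk = content[start:start + chunk_size]
        # Line of the first character: newlines strictly before it, plus 1.
        start_line = content.count("\n", 0, start) + 1
        # Line of the last character of the chunk.
        end_line = content.count("\n", 0, start + len(chunk) - 1) + 1
        chunks.append((chunk, start_line, end_line))
    return chunks
```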
LLM-over-text tools for AutoML+.
Text tools send plain-text content (HTML chunks, documents, etc.) to a language model and parse the structured response. Unlike VLM tools, no image input is required; unlike static tools, they rely on an external LLM API.
Current tools:
- ChunkResult: dataclass holding the outcome (score, image feedback, LLM response, or error) for a single processed text chunk.
- _process_single_chunk: sends one HTML chunk to the LLM for WCAG analysis, extracts a numeric score from the response, and runs AltTextChecker on any <img> tags found in the chunk.
ChunkResult (dataclass)

Result for processing a single chunk of an HTML file.

Source code in app/automlplus/tools/text.py, lines 30-40.
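The field names below are assumptions inferred from the summary above (score, image feedback, LLM response, error), and the score-extraction regex is a hypothetical illustration of "extracts a numeric score from the response", not the project's actual parsing logic:

```python
import re
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ChunkResult:
    """Outcome for one processed HTML chunk (field names are assumptions)."""
    chunk_index: int
    score: Optional[float] = None
    image_feedback: list = field(default_factory=list)
    llm_response: str = ""
    error: Optional[str] = None


def extract_score(llm_response: str) -> Optional[float]:
    """Pull the first number following the word 'score' (hypothetical format)."""
    match = re.search(r"score\D*?(\d+(?:\.\d+)?)", llm_response, re.IGNORECASE)
    return float(match.group(1)) if match else None
```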
VLM (Vision Language Model) tools for AutoML+.
A VLM task involves passing one or more images together with a text prompt to a multimodal language model and processing its response. The classes here cover two use-cases:
- ImagePromptRunner: general-purpose; run or stream any user-supplied prompt over an image (file upload or URL).
- AltTextChecker: specialised; evaluate whether provided alt text accurately describes an image, using a structured VLM prompt defined in Jinja2 templates.
AltTextChecker

Check whether provided alt text matches an image using a VLM.

Source code in app/automlplus/tools/vlm.py, lines 108-214.
ImagePromptRunner

Run a VLM on an image and user-provided prompt.

Source code in app/automlplus/tools/vlm.py, lines 27-105.
run_stream(image_bytes=None, image_path_or_url=None, prompt='', model=None, jinja_environment=None) (staticmethod)

Stream VLM output for an image+prompt interaction. Yields incremental text chunks.

Source code in app/automlplus/tools/vlm.py, lines 82-105.
Website Accessibility
Orchestration pipeline for web accessibility analysis.
This module coordinates the full accessibility analysis workflow: it splits an HTML document into chunks, fans out concurrent LLM-over-text analysis via _process_single_chunk, and aggregates results. It is intentionally thin — all tool logic lives in app.automlplus.tools.
- run_accessibility_pipeline: main entry point; returns a list of ChunkResult objects, one per chunk.
- resolve_coroutines: utility to recursively await coroutine-valued attributes when serialising results.
- stream_accessibility_results: streams the resolved results as a single JSON array (used for streaming response endpoints).
resolve_coroutines(obj) (async)

Recursively await any coroutine attributes in an object.

Source code in app/automlplus/website_accessibility/pipeline.py, lines 49-63.
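A sketch of the idea: walk a structure, awaiting any coroutine encountered and recursing into containers. The real helper reportedly also walks object attributes; this simplified version handles coroutines, dicts, and lists only:

```python
import asyncio
import inspect


async def resolve_coroutines(obj):
    """Recursively await coroutine values inside a structure (sketch)."""
    if inspect.iscoroutine(obj):
        # Await the coroutine, then resolve whatever it produced.
        return await resolve_coroutines(await obj)
    if isinstance(obj, dict):
        return {k: await resolve_coroutines(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [await resolve_coroutines(v) for v in obj]
    return obj


async def _demo():
    async def answer():
        return 42
    return await resolve_coroutines({"value": answer(), "plain": "ok"})

result = asyncio.run(_demo())
```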
run_accessibility_pipeline(content, filename, jinja_environment, chunk_size, concurrency=4, context='') (async)

Split HTML into chunks and process them concurrently with a semaphore.

Source code in app/automlplus/website_accessibility/pipeline.py, lines 27-46.
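The "concurrently with a semaphore" pattern in the docstring is a standard asyncio fan-out: wrap each per-chunk task in a semaphore so at most `concurrency` LLM calls run at once, then gather the results in order. A generic sketch of that pattern (the worker here is a stand-in for the module's _process_single_chunk):

```python
import asyncio


async def process_chunks(chunks, worker, concurrency=4):
    """Run worker over every chunk, capping parallelism with a semaphore."""
    semaphore = asyncio.Semaphore(concurrency)

    async def bounded(index, chunk):
        async with semaphore:
            return await worker(index, chunk)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(i, c) for i, c in enumerate(chunks)))


async def _demo():
    async def worker(i, chunk):
        await asyncio.sleep(0)  # stand-in for an LLM call
        return f"{i}:{chunk}"
    return await process_chunks(["<p>a</p>", "<p>b</p>"], worker, concurrency=2)

results = asyncio.run(_demo())
```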
stream_accessibility_results(results) (async)

Stream results as a single JSON array instead of JSONL.

Source code in app/automlplus/website_accessibility/pipeline.py, lines 66-84.
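Streaming "a single JSON array instead of JSONL" typically means emitting the opening bracket, then each serialised item with separating commas, then the closing bracket, so the client receives incremental bytes that still parse as one array. A generic async-generator sketch of that shape (function name and item handling are assumptions):

```python
import asyncio
import json


async def stream_json_array(results):
    """Yield one JSON array incrementally, item by item (sketch)."""
    yield "["
    for i, item in enumerate(results):
        if i:
            yield ","  # separator between items, never trailing
        yield json.dumps(item)
    yield "]"


async def _collect():
    # Concatenate the streamed parts as a client would.
    return "".join([part async for part in stream_json_array([{"score": 1}, {"score": 2}])])

payload = asyncio.run(_collect())
```

In FastAPI, a generator like this would be handed to a StreamingResponse with a JSON media type.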