Thursday, December 11, 2025

New top story on Hacker News: Show HN: SIM – Apache-2.0 n8n alternative

Show HN: SIM – Apache-2.0 n8n alternative
8 by waleedlatif1 | 0 comments on Hacker News.
Hey HN, Waleed here. We're building Sim ( https://sim.ai/ ), an open-source visual editor to build agentic workflows. Repo here: https://ift.tt/ArQuSnb . Docs here: https://docs.sim.ai . You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:

- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...
- Tool calling with granular control: forced, auto
- Agent memory: conversation memory with sliding window support (by last n messages or tokens)
- Trace spans: detailed logging and observability for nested workflows and tool calling
- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents
- Workflow deployment versioning with rollbacks
- MCP support, Human-in-the-loop block
- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives. (A minimal sketch of this scheduling appears below.)

Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly through to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://ift.tt/ueXsOSz [2] https://ift.tt/Z05QCJa
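To make the scheduling model concrete, here is a minimal Python asyncio sketch of "nodes run as soon as their upstream blocks are satisfied". The block names are invented and this is not Sim's actual engine, just an illustration of dependency-driven concurrent execution.

# Sketch only: run a DAG of blocks concurrently, starting each node
# as soon as all of its upstream dependencies have finished.
import asyncio

# Hypothetical workflow: block name -> list of upstream block names
GRAPH = {
    "trigger": [],
    "fetch_issues": ["trigger"],
    "fetch_prs": ["trigger"],        # runs concurrently with fetch_issues
    "summarize": ["fetch_issues", "fetch_prs"],
    "notify_slack": ["summarize"],
}

async def run_block(name: str) -> str:
    await asyncio.sleep(0.1)         # stand-in for real block work
    return f"{name} done"

async def run_workflow(graph: dict[str, list[str]]) -> dict[str, str]:
    tasks: dict[str, asyncio.Task] = {}

    async def run_node(name: str) -> str:
        # Wait for every upstream block, then execute this one.
        await asyncio.gather(*(tasks[dep] for dep in graph[name]))
        return await run_block(name)

    # One task per block; none of them runs until we yield to the loop below.
    for name in graph:
        tasks[name] = asyncio.create_task(run_node(name))
    results = await asyncio.gather(*tasks.values())
    return dict(zip(graph, results))

print(asyncio.run(run_workflow(GRAPH)))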

Wednesday, December 10, 2025

New top story on Hacker News: Show HN: Automated license plate reader coverage in the USA

Show HN: Automated license plate reader coverage in the USA
19 by sodality2 | 3 comments on Hacker News.
Built this over the last few days, based on a Rust codebase that parses the latest ALPR reports from OpenStreetMap, calculates navigation statistics from every tagged residential building to nearby amenities, and tests each route for intersection with those ALPR cameras (Flock being the most widespread).

These have gotten more controversial in recent months due to their indiscriminate, large-scale data collection, with 404 Media publishing many original pieces ( https://ift.tt/uhRbt2a ) about their adoption and (ab)use across the country. I wanted to use open source datasets to track the rapid expansion, especially per-county, as this data can be crucial for 'deflock' movements petitioning counties and city governments to ban and remove them.

In some counties, the tracking becomes so widespread that most people can't go anywhere without being photographed. This includes possibly sensitive areas, like places of worship and medical facilities. The argument for their legality rests on the notion that these cameras are equivalent to 'mere observation', but the enormous scope, and the data sharing agreements in place to share and access millions of records without warrants, blur the lines of the Fourth Amendment.
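The core geometric test can be sketched in a few lines. This is not the author's Rust pipeline; it is a Python illustration using shapely, with made-up coordinates and an assumed camera capture radius.

# Sketch of the core check: does a computed route pass within a camera's
# capture radius? Coordinates are in a projected CRS (metres); real OSM
# data would need projecting first.
from shapely.geometry import LineString, Point

route = LineString([(0, 0), (500, 120), (1200, 300)])   # home -> grocery store
cameras = [Point(480, 140), Point(2000, 2000)]          # tagged ALPR locations
CAPTURE_RADIUS_M = 60                                   # assumed detection range

flagged = [cam for cam in cameras if route.distance(cam) <= CAPTURE_RADIUS_M]
print(f"{len(flagged)} of {len(cameras)} cameras observe this route")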

New top story on Hacker News: Israel used Palantir technologies in pager attack in Lebanon

Israel used Palantir technologies in pager attack in Lebanon
164 by cramsession | 89 comments on Hacker News.


Friday, October 17, 2025

New top story on Hacker News: Show HN: We packaged an MCP server inside Chromium

Show HN: We packaged an MCP server inside Chromium
7 by felarof | 1 comments on Hacker News.
Hey HN, we just shipped a browser with an inbuilt MCP server! We're a YC startup (S24) building BrowserOS — an open‑source Chromium fork. We're a privacy‑first alternative to the new wave of AI browsers like Dia and Perplexity Comet. Since launching ~3 months ago, the #1 request has been to expose our browser as an MCP server.

Google beat us to launch with chrome-devtools-mcp (solid product btw), which lets you build/debug web apps by connecting Chrome to coding assistants. But we wanted to take this a step further: we packaged the MCP server directly into our browser binary. That gives three advantages:

1. MCP server setup is super simple — no npx install, no starting Chrome with CDP flags, you just download the BrowserOS binary.
2. With our browser's inbuilt MCP server, AI agents can interact using your logged‑in sessions (unlike chrome-devtools-mcp, which starts a fresh headless instance each time).
3. Our MCP server also exposes new APIs from Chromium's C++ core to click, type, and draw bounding boxes on a webpage. Our APIs are not CDP-based (Chrome DevTools Protocol) and are robust against anti-bot detection.

A few example use cases for BrowserOS-mcp:

a) *Frontend development with Claude Code*: instead of screenshot‑pasting, claude-code gets WYSIWYG access. It can write code, take a screenshot, check console logs, and fix issues in one agentic sweep. Since it has your sessions, it can do QA stuff like "test the auth flow with my Google Sign‑In." Here's a video of claude-code using BrowserOS to improve the CSS styling with back-and-forth checking: https://youtu.be/vcSxzIIkg_0

b) *Use as an agentic browser:* You can install BrowserOS-mcp in claude-code or Claude Desktop and do things like form-filling, extraction, multi-step agentic tasks, etc. It honestly works better than Perplexity Comet! Here's a video of claude-code opening the top 5 Hacker News posts and summarizing them: https://youtu.be/rPFx_Btajj0

*How we packaged the MCP server inside the Chromium binary*: We package the server as a Bun binary and expose MCP tools over HTTP instead of stdio (to support multiple sessions). A BrowserOS controller is installed as an extension at the application layer, and the MCP server connects to it over WebSocket to control the browser (the relay pattern is sketched below). Here's a rough architecture diagram: https://dub.sh/browseros-mcp-diag

*How to install and use it:* We put together a short guide here: https://ift.tt/7ITN2Di

Our vision is to reimagine the browser as an operating system for AI agents, and packaging an MCP server directly into it is a big unlock for that! I'll be hanging around all day, would love to get your feedback and answer any questions!
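The relay pattern (tool calls arriving over HTTP, forwarded to a browser-side controller over WebSocket) can be sketched language-agnostically. BrowserOS's server is a Bun binary, so the Python/aiohttp code below only illustrates the shape of the flow; endpoint names and message fields are made up.

# Sketch: HTTP-facing tool calls relayed to an extension connected via WebSocket.
import asyncio, json, uuid
from aiohttp import web

controller_ws = None                      # the single connected controller (extension)
pending: dict[str, asyncio.Future] = {}   # call id -> future awaiting the result

async def controller(request: web.Request) -> web.WebSocketResponse:
    # The browser-side controller keeps one long-lived WebSocket open.
    global controller_ws
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    controller_ws = ws
    async for msg in ws:
        reply = json.loads(msg.data)
        if fut := pending.pop(reply["id"], None):
            fut.set_result(reply["result"])
    return ws

async def call_tool(request: web.Request) -> web.Response:
    # An MCP client POSTs a tool call; we relay it and await the controller's answer.
    if controller_ws is None:
        return web.json_response({"error": "no controller connected"}, status=503)
    call = await request.json()           # e.g. {"tool": "click", "args": {...}}
    call_id = str(uuid.uuid4())
    fut = asyncio.get_running_loop().create_future()
    pending[call_id] = fut
    await controller_ws.send_json({"id": call_id, **call})
    return web.json_response({"result": await fut})

app = web.Application()
app.add_routes([web.get("/controller", controller), web.post("/tool", call_tool)])

if __name__ == "__main__":
    web.run_app(app, port=8765)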

New top story on Hacker News: Forgejo v13.0 Is Available

Forgejo v13.0 Is Available
3 by birdculture | 0 comments on Hacker News.


Thursday, October 16, 2025

New top story on Hacker News: Show HN: How Useless Are You? A brutally honest skills check

Show HN: How Useless Are You? A brutally honest skills check
10 by mraspuzzi | 9 comments on Hacker News.
We built this to answer "am I a fit for this role?" after noticing how hard it is to get honest feedback when applying to a YC startup or something else entirely. It's a custom 5-minute challenge that roasts you after. Added a leaderboard for those who want to see how they stack up. Roast us below.

Sunday, October 12, 2025

New top story on Hacker News: 'Death to Spotify': the DIY movement to get artists and fans to quit the app

'Death to Spotify': the DIY movement to get artists and fans to quit the app
50 by mitchbob | 28 comments on Hacker News.


New top story on Hacker News: Show HN: I built a simple ambient sound app with no ads or subscriptions

Show HN: I built a simple ambient sound app with no ads or subscriptions
6 by alpaca121 | 0 comments on Hacker News.
I’ve always liked having background noise while working or falling asleep, but I got frustrated that most “white noise” or ambient sound apps are either paywalled, stuffed with ads, or try to upsell subscriptions for basic features. So I made Ambi, a small iOS app with a clean interface and a set of freely available ambient sounds — rain, waves, wind, birds, that sort of thing. You can mix them, adjust volume levels, and just let it play all night or while you work. Everything works offline and there are no hidden catches. It’s something I built for myself first, but I figured others might find it useful too. Feedback, bugs, and suggestions are all welcome. https://ift.tt/mf5dZNG...

Friday, October 10, 2025

New top story on Hacker News: Toyota aims to launch the 'first' all-solid-state EV batteries

Toyota aims to launch the 'first' all-solid-state EV batteries
11 by thelastgallon | 0 comments on Hacker News.


New top story on Hacker News: Show HN: Modeling the Human Body in Rust So I Can Cmd+Click Through It

Show HN: Modeling the Human Body in Rust So I Can Cmd+Click Through It
25 by lleong1618 | 17 comments on Hacker News.
I started this trying to understand two things: why my Asian friends turn red after drinking, and why several friends all seemed to have migraine clusters. I was reading medical papers and textbooks, but kept getting lost jumping between topics. I thought: what if I could just Cmd+Click through this like code? What if "ALDH2 gene" was actually clickable, and took me to the variant, the phenotype, the population frequencies?

So I started modeling human biology in Rust with my Ralph agent (Claude in a loop, ty ghuntley). Turns out the type system is perfect for this. Every biological entity is strongly-typed with relationships enforced at compile time. After 1 day of agent coding:

- 277 Rust files, ~95k lines of code
- 1,561 tests passing
- 13 complete organ systems
- Genetics with ancestry-specific variants
- Clinical pathology models

Try it:

git clone https://ift.tt/y3j4NZ2
cd open_human_ontology
cargo run --example ide_navigation_demo

Then open `examples/ide_navigation_demo.rs` and Cmd+Click through.

Understanding Asian flush:

AsianGeneticVariantsCatalog::get_metabolic_variants()
// Click through to:
// → ALDH2 gene on chromosome 12q24.12
// → rs671 variant (Glu504Lys)
// → 40% frequency in Japanese population
// → Alcohol flush reaction
// → 10x esophageal cancer risk with alcohol
// → Acetaldehyde metabolism pathway

Understanding migraines:

Migraine {
    subtype: WithAura,
    triggers: [Stress, LackOfSleep, HormonalChanges],
    genetic_variants: ["rs2075968", "rs1835740"],
    ...
}
// Click through to:
// → 17 migraine trigger types
// → 12 aura symptom types
// → Genetic risk factors
// → Why clusters happen (HormonalChanges → Menstruation)

Now I can actually navigate the connections instead of flipping through PDFs. Heart → CoronaryArtery → Plaque. VisualCortex → 200M neurons → NeuralConnection pathways. It's like Wikipedia but type-checked and with jump-to-definition.

This isn't production medical software - it's a learning tool. But it's way more useful than textbooks for understanding how biological systems connect. The agent keeps expanding it. Sometimes it OOMs but that's part of the fun.

Tech: Rust, nalgebra, serde, rayon, proptest

I am not a doctor or medical professional; this is for my own education. You can contribute to it if you want, or review and open some PRs if you find wrong information or want to add references.

New top story on Hacker News: Illegible Nature of Software Development Talent

Illegible Nature of Software Development Talent
16 by hackthemack | 11 comments on Hacker News.


Wednesday, October 8, 2025

New top story on Hacker News: Show HN: I built a local-first podcast app

Show HN: I built a local-first podcast app
10 by aegrumet | 4 comments on Hacker News.
I worked on early podcast software in 2004 (iPodder/Juice) and have been a heavy podcast consumer ever since. I wanted a podcast app that respects your privacy and embraces the open web—and to explore what's possible in the browser. The result is wherever.audio, which you can try right now at the link above.

How it works: It's a progressive web app that stores all your subscriptions and data locally in your browser using IndexedDB. Add it to your home screen and it feels native. Works offline with downloaded episodes. No central server storing your data—just some Cloudflare/AWS helpers to smooth out browser limitations.

What makes it different:

- True local-first: Your data stays on your device
- Custom feeds: Add any RSS feed, not just what's in a directory
- On-device search: Search across all feeds and episodes, including your custom ones
- Podcasting 2.0 support: Chapters, transcripts, funding tags, and others
- Auto-generated chapters: For popular shows that don't have them
- AI-powered discovery: Ask questions to find shows and episodes (this feature does send queries to a 3rd party API, and also uses anonymized analytics while we work out the prompts)
- Audio-guided tutorials: Interactive walkthroughs with voice guidance and visual cues

The basics work well too: standard playback features, queue management, speed controls, etc.

I'm really interested in feedback—this is more passion project than business right now. I've been dogfooding it as my daily podcast app for over a year, and I'm open to exploring making it a business if people find it valuable. Curious if there are unmet needs that a privacy-focused, open web approach could address.

New top story on Hacker News: A 9KB (3KB gzip) single HTML notebook, perfect for minimalists

A 9KB (3KB gzip) single HTML notebook, perfect for minimalists
7 by chunqiuyiyu | 2 comments on Hacker News.


Sunday, October 5, 2025

New top story on Hacker News: What GPT-OSS leaks about OpenAI's training data

What GPT-OSS leaks about OpenAI's training data
40 by fi-le | 1 comments on Hacker News.


New top story on Hacker News: Callbacks in C++ Using Template Functors – Rich Hickey (1994)

Callbacks in C++ Using Template Functors – Rich Hickey (1994)
12 by zengid | 4 comments on Hacker News.


New top story on Hacker News: Show HN: ut – Rust based CLI utilities for devs and IT

Show HN: ut – Rust based CLI utilities for devs and IT
7 by ksdme9 | 2 comments on Hacker News.
Hey HN, I find myself reaching for tools like it-tools.tech or other random sites every now and then during development or debugging. So, I built a toolkit with a sane and simple CLI interface for most of those tools. For the curious and lazy, at the moment, ut has tools for:

- Encoding: base64 (encode, decode), url (encode, decode)
- Hashing: md5, sha1, sha224, sha256, sha384, sha512
- Data Generation: uuid (v1, v3, v4, v5), token, lorem, random
- Text Processing: case (lower, upper, camel, title, constant, header, sentence, snake), pretty-print, diff
- Development Tools: calc, json (builder), regex, datetime
- Web & Network: http (status), serve, qr
- Color & Design: color (convert)
- Reference: unicode

For full disclosure, parts of the toolkit were built with Claude Code (I wanted to use this as an opportunity to play with it more). Feel free to open feature requests and/or contribute.

Saturday, October 4, 2025

New top story on Hacker News: Why NetNewsWire Is Not a Web App

Why NetNewsWire Is Not a Web App
4 by frizlab | 1 comments on Hacker News.


New top story on Hacker News: Show HN: Run – a CLI universal code runner I built while learning Rust

Show HN: Run – a CLI universal code runner I built while learning Rust
11 by esubaalew | 4 comments on Hacker News.
Hi HN — I’m learning Rust and decided to build a universal CLI for running code in many languages. The tool, Run, aims to be a single, minimal-dependency utility for: running one-off snippets (from CLI flags), running files, reading and executing piped stdin, and providing language-specific REPLs that you can switch between interactively.

I designed it to support both interpreted languages (Python, JS, Ruby, etc.) and compiled languages (Rust, Go, C/C++). It detects languages from flags or file extensions, can compile temporary files for compiled languages, and exposes a unified REPL experience with commands like :help, :lang, and :quit.

Install: cargo install run-kit (or use the platform downloads on GitHub). Source & releases: https://ift.tt/mrIAzhb

I used Rust while following the official learning resources and used AI to speed up development, so I expect there are bugs and rough edges. I’d love feedback on: usability and UX of the REPL, edge cases for piping input to language runtimes, security considerations (sandboxing/resource limits), packaging and cross-platform distribution.

Thanks — I’ll try to answer questions and share design notes.
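The detect-then-dispatch idea (interpret directly, or compile to a temp binary first) can be sketched concisely. Run itself is written in Rust; the Python below is only a concept sketch, with illustrative commands and the assumption that the relevant toolchains are on PATH.

# Concept sketch: pick a runner from the file extension; compile to a temp
# binary for compiled languages, invoke an interpreter otherwise.
import subprocess, sys, tempfile
from pathlib import Path

INTERPRETED = {".py": ["python3"], ".js": ["node"], ".rb": ["ruby"]}
COMPILED = {".c": ["cc"], ".rs": ["rustc"]}

def run_file(path: str) -> None:
    ext = Path(path).suffix
    if ext in INTERPRETED:
        subprocess.run(INTERPRETED[ext] + [path], check=True)
    elif ext == ".go":
        subprocess.run(["go", "run", path], check=True)   # go handles its own temp build
    elif ext in COMPILED:
        out = Path(tempfile.mkdtemp()) / "a.out"
        subprocess.run(COMPILED[ext] + [path, "-o", str(out)], check=True)
        subprocess.run([str(out)], check=True)
    else:
        sys.exit(f"unsupported extension: {ext}")

if __name__ == "__main__":
    run_file(sys.argv[1])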

New top story on Hacker News: Use theorem provers to ensure the correctness of your LLM's reasoning

Use theorem provers to ensure the correctness of your LLM's reasoning
9 by barthelomew | 0 comments on Hacker News.


New top story on Hacker News: Knowledge Infusion Scaling Law for Pre-Training Large Language Models

Knowledge Infusion Scaling Law for Pre-Training Large Language Models
15 by PaulHoule | 1 comments on Hacker News.


Monday, September 15, 2025

New top story on Hacker News: Show HN: MCP Server Installation Instructions Generator

Show HN: MCP Server Installation Instructions Generator
7 by pmig | 0 comments on Hacker News.
Hey HN, we’ve been experimenting a lot with MCP servers lately, and one of the most time-consuming challenges has been connecting MCP clients to remote MCP servers. To solve this, we built a library that generates installation instructions on the fly, enabling 1-click installation buttons and links for most clients out there. Feel free to try out the generator and use it to improve the README of your remote MCP server with the generated markdown. You can even configure the library to return HTML instructions if someone accesses your remote MCP server via the web.

New top story on Hacker News: Show HN: AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers

Show HN: AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers
10 by Aherontas | 2 comments on Hacker News.
Hey all! I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems. To accompany the workshop, I put together a demo repo: https://ift.tt/KICv5HM... (I will also add the slides to my blog soon: https://ift.tt/i4P5nZh )

The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration.

Features:

- Multiple agents running in containers
- MCP servers (Brave search, GitHub, filesystem, etc.) as tools
- A2A communication between services
- Minimal UI for experimenting with tech-trend and repo analysis

I built this repo because most agent frameworks look great in isolated demos, but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases. It’s not production-grade, but I would love feedback, criticism, or war stories from anyone who’s tried building actual multi-agent systems.

Big questions: Do you think agent-to-agent protocols like MCP/A2A will stick? Or will the future be mostly single powerful LLMs with plugin stacks? Thanks — excited to hear what the HN crowd thinks!
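For readers who have not combined these libraries before, here is a minimal sketch (not the workshop repo) of a FastAPI endpoint handing a question to a Pydantic-AI agent. The model name and prompt are placeholders, and the result field name varies by pydantic-ai version.

# Minimal FastAPI + Pydantic-AI sketch; names are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel
from pydantic_ai import Agent

app = FastAPI()
trend_agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You analyse tech trends from repository data.",
)

class Question(BaseModel):
    text: str

@app.post("/ask")
async def ask(q: Question) -> dict:
    result = await trend_agent.run(q.text)
    # Recent pydantic-ai releases expose result.output; older ones used result.data.
    return {"answer": result.output}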

Wednesday, September 10, 2025

New top story on Hacker News: Launch HN: Recall.ai (YC W20) – API for meeting recordings and transcripts

Launch HN: Recall.ai (YC W20) – API for meeting recordings and transcripts
13 by davidgu | 3 comments on Hacker News.
Hey HN, we're David and Amanda from Recall.ai ( https://www.recall.ai ). Today we’re launching our Desktop Recording SDK, a way to get meeting data without a bot in the meeting: https://ift.tt/r69hZCB . It’s our biggest release in quite a while, so we thought we’d finally do our Launch HN :)

Here’s a demo that shows it producing a transcript from a meeting, followed by examples in code: https://www.youtube.com/watch?v=4croAGGiKTA . API docs are at https://docs.recall.ai/ .

Back in W20, our first product was an API that lets you send a bot participant into a meeting. This gives developers access to audio/video streams and other data in the meeting. Today, this API powers most of the meeting recording products on the market.

Recently, meeting recording through a desktop form factor instead of a bot has become popular. Many products like Notion and ChatGPT have added desktop recording functionality, and LLMs have made it easier to work with unstructured transcripts. But it’s actually hard to reliably record meetings at scale with a desktop app, and most developers who want to add recording functionality don’t want to build all this infrastructure.

Doing a basic recording with just the microphone and system audio is fairly straightforward since you can just use the system APIs. But it gets a lot harder when you want to capture speaker names, produce a video recording, get real-time data, or run this in production at large scale:

- Capturing speaker names involves using accessibility APIs to screen-scrape the video conference window to monitor who is speaking at what time. When video conferencing platforms change their UI, we must ship a change immediately so this keeps working.
- Producing a video recording that is clean and doesn’t capture the video conferencing platform UI involves detecting the participant tiles, cropping them out, and compositing them together into a clean video recording.
- Because the desktop recording code runs on end-user machines, we need to make it as efficient as possible. This means writing highly platform-optimized code, taking advantage of hardware encoders when available, and spending a lot of time doing profiling and performance testing.

Meeting recording has zero margin for failure because if anything breaks, you lose the data forever. Reliability is especially important, which dramatically increases the amount of engineering effort required. Our Desktop Recording SDK takes care of all this and lets developers build meeting recording features into their desktop apps, so they can record both video conferences and in-person meetings without a bot.

We built Recall.ai because we experienced this problem ourselves. At our first startup, we built a tool for product managers that included a meeting recording feature. 70% of our engineering time was taken up by just this feature! We ended up starting Recall.ai to solve this instead. Since then, over 2000 companies use us to power their recording features, e.g. Hubspot for sales call recording and Clickup for their AI note taker. Our users are engineering teams building commercial products for financial services, telehealth, incident management, sales, interviewing, and more. We also power internal tooling for large enterprises.

Running this sort of infrastructure has led to unexpected technical challenges! For example, we had to debug a 1 in 36 million segfault in our audio encoder ( https://ift.tt/iWZcm7P... ), we encountered a Postgres lock-up that only occurs when you have tens of thousands of concurrent writers ( https://ift.tt/V7pQABN ), and we saved over $1M a year on AWS by optimizing the way we shuffle data around between our processes ( https://ift.tt/rjna9Gi ).

You can try it here: https://www.recall.ai . It's self-serve with $5 of free credits. Pricing starts at $0.70 for every hour of recording, prorated to the second. We offer volume discounts with scale. All data recorded through Recall.ai is the property of our customers, we support 0-day retention, and we don’t train models on customer data. We would love your feedback!

Wednesday, September 3, 2025

New top story on Hacker News: Another YC company was acquihired today by OpenAI

Another YC company was acquihired today by OpenAI
15 by liurenju | 6 comments on Hacker News.


New top story on Hacker News: Show HN: Entropy-Guided Loop – How to make small models reason

Show HN: Entropy-Guided Loop – How to make small models reason
11 by andrewmonostate | 0 comments on Hacker News.
TLDR: A small, vendor-agnostic inference loop that turns token logprobs/perplexity/entropy into an extra pass and reasoning for LLMs.

- Captures logprobs/top-k during generation, computes perplexity and token-level entropy.
- Triggers at most one refine when simple thresholds fire; passes a compact “uncertainty report” (uncertain tokens + top-k alts + local context) back to the model.
- In our tests on technical Q&A / math / code, a small model recovered much of “reasoning” quality at ~⅓ the cost while refining ~⅓ of outputs.

I kept seeing “reasoning” models behave like expensive black boxes. Meanwhile, standard inference already computes useful signals both before softmax normalization and after it (logprobs), which we usually throw away. This loop tries the simplest thing you could think of: use those signals to decide when (and where) to think again.

GitHub (notebook + minimal code): https://ift.tt/v2pLKoZ
Paper (short & engineer-written): https://ift.tt/ru7BVew
Blog (more context): https://ift.tt/iPx1kMp

Requirements: Python, an API that exposes logprobs (tested with OpenAI non-reasoning 4.1), OPENAI_API_KEY, and WEAVE for observability. Run the notebook; it prints metrics and shows which tokens triggered refinement.

- Python, simple loop (no retraining).
- Uses Responses API logprobs/top-k; metrics: perplexity, max token entropy, low-confidence counts.
- Weave for lightweight logging/observability (optional).
- Passing alternatives (not just “this looks uncertain”) prevents over-correction.
- A simple OR rule (ppl / max-entropy / low-confidence count) catches complementary failure modes.
- Numbers drift across vendors; keeping the method vendor-agnostic is better than chasing fragile pairings.
- Needs APIs that expose logprobs/top-k.
- Results are indicative—not a leaderboard; focus is on within-model gains (single-pass vs +loop).
- Thresholds might need light tuning per domain.
- One pass only; not a chain-of-thought replacement.
- Run it on your models and ideas (e.g., 4o-mini, v3, Llama variants with logprobs) and share logs in a PR for our README on GitHub if you'd like. PRs welcome - I’ll credit and link.

Overall, let me know if you find making small models reason like this useful!
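A short sketch of the uncertainty signals described above, not the author's notebook: it uses the OpenAI Chat Completions logprobs fields (the post uses the Responses API), the model name and thresholds are illustrative, and the actual refine step is elided.

# Sketch: compute perplexity, max top-k entropy, and a low-confidence count,
# then apply a simple OR rule to decide whether to trigger one refine pass.
import math
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4.1-mini",                       # illustrative model choice
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    logprobs=True,
    top_logprobs=5,
)

tokens = resp.choices[0].logprobs.content
logprobs = [t.logprob for t in tokens]
perplexity = math.exp(-sum(logprobs) / len(logprobs))

def topk_entropy(tok) -> float:
    # Entropy over the returned top-k alternatives (approximates the full
    # distribution's entropy at this position).
    probs = [math.exp(alt.logprob) for alt in tok.top_logprobs]
    total = sum(probs)
    return -sum(p / total * math.log(p / total) for p in probs)

max_entropy = max(topk_entropy(t) for t in tokens)
low_conf = sum(1 for t in tokens if t.logprob < math.log(0.5))

# Simple OR rule with illustrative thresholds: refine at most once if any fires.
if perplexity > 1.5 or max_entropy > 1.0 or low_conf > 3:
    print("would trigger one refine pass with an uncertainty report")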

New top story on Hacker News: Vector search on our codebase transformed our SDLC automation

Vector search on our codebase transformed our SDLC automation
17 by antonybrahin | 2 comments on Hacker News.
Hey HN, in software development, the process of turning a user story into detailed documentation and actionable tasks is critical. However, this manual process can often be a source of inconsistency and a significant time investment. I was driven to see if I could streamline and elevate it.

I know this is a hot space, with big players like GitHub and Atlassian building integrated AI, and startups offering specialized platforms. My goal wasn't to compete with them, but to see what was possible by building a custom, "glass box" solution using the best tools for each part of the job, without being locked into a single ecosystem.

What makes this approach different is the flexibility and full control. Instead of a pre-packaged product, this is a resilient workflow built on Power Automate, which acts as the orchestrator for a sequence of API calls:

- Five calls to the Gemini API for the core generation steps (requirements, tech spec, test strategy, etc.).
- One call to an Azure OpenAI model to create vector embeddings of our codebase.
- One call to Azure AI Search to perform the Retrieval-Augmented Generation (RAG). This was the key to getting context-aware, non-generic outputs. It reads our actual code to inform the technical spec and tasks (a toy sketch of this retrieval step appears below).
- A bunch of direct calls to the Azure DevOps REST API (using a PAT) to create the wiki pages and work items, since the standard connectors were a bit limited.

The biggest challenge was moving beyond simple prompts and engineering a resilient system. Forcing the final output into a rigid JSON schema instead of parsing text was a game-changer for reliability. The result is a system that saves us hours on every story and produces remarkably consistent, high-quality documentation and tasks.

The full write-up with all the challenges, final prompts, and screenshots is in the linked blog post. I’m here to answer any questions. Would love to hear your feedback and ideas!
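A toy stand-in for the retrieval step (the post uses Azure OpenAI embeddings plus Azure AI Search): embed the user story, pull the closest code chunks by cosine similarity, and prepend them to the generation prompt. The embed() function here is a placeholder, not a real embeddings call.

# Toy RAG retrieval sketch; embed() is a placeholder for an embeddings API.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))   # fake embedding
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)

code_chunks = ["def create_invoice(...): ...", "class PaymentGateway: ..."]
index = np.stack([embed(c) for c in code_chunks])            # rows are unit vectors

story = "As a user I can pay an invoice with a saved card"
scores = index @ embed(story)                                # cosine similarity
top = [code_chunks[i] for i in np.argsort(scores)[::-1][:2]]

prompt = "Relevant code:\n" + "\n---\n".join(top) + f"\n\nWrite a tech spec for: {story}"
print(prompt)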

Tuesday, August 26, 2025

New top story on Hacker News: Titles Matter

Titles Matter
5 by speckx | 0 comments on Hacker News.


New top story on Hacker News: Show HN: SecretMemoryLocker – File Encryption Without Static Passwords

Show HN: SecretMemoryLocker – File Encryption Without Static Passwords
6 by YuriiDev | 0 comments on Hacker News.
I built SecretMemoryLocker ( https://ift.tt/oXjtLJs ), a file encryption tool that generates keys dynamically from your answers to personal questions instead of using a static master password. This makes offline brute-force attacks much more difficult. Think of it as a password manager meets mnemonic seed recovery, but without storing any sensitive keys on disk.

Why? I kept losing master passwords and wanted a solution that wasn't tied to a single point of failure. I also wanted to create a "digital legacy" that my family could access only under specific conditions. The core principle is knowledge-based encryption: the key only exists in memory when you provide the correct answers.

Status:
* MVP is ready for Windows (.exe).
* Linux and macOS support is planned.
* UI is available in English, Spanish, and Ukrainian.

Key Features:
* No Static Secrets: No master password or seed phrase is ever stored. The key is reconstructed on the fly.
* Knowledge-Based Key Generation: The final encryption key is derived from a combination of your personal answers and file metadata.
* Offline Brute-Force Resistance: Uses MirageLoop, a decoy system that activates when incorrect answers are entered. Instead of decrypting real data, it generates an endless sequence of AI-created questions from a secure local database, creating an illusion of progress while keeping your real data untouched.
* Offline AI Generation Mode: Optional offline Q&A generator (prototype).

How It Works (Simplified):
1) Files are packed into an AES-256 encrypted ZIP archive.
2) A JSON key file stores the questions in an encrypted chain. Each subsequent question is encrypted with a key derived from the previous correct answer and the file's hash. This forces you to answer them sequentially.
3) The final encryption key for the ZIP file is derived by combining the hashes of all your correct answers. The key derivation formula looks like this (sketched in code below):

K_final = SHA256(H(answer1 + file_hash) + H(answer2 + file_hash) + ...)

(Note: We are aware that a fast hash like SHA256 is not ideal for a KDF. We plan to migrate to Argon2 in a future release to further strengthen resistance against brute-force attacks.)

To encrypt, you provide a file. This creates two outputs:

your_file.txt → your_file_SMLkey.json + your_file_SecretML.zip

To decrypt, you need both files and the correct answers.

Install & Quick Start: Download the EXE from GitHub Releases (no dependencies needed): https://ift.tt/ZTQUgxB

Encrypt: SecretMemoryLocker.exe --encrypt "C:\docs\important.pdf"
Decrypt: SecretMemoryLocker.exe --decrypt "C:\docs\important_SMLkey.json"

I would love to get your feedback on the concept, the user experience, and any security assumptions I've made. Thanks!
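Here is a direct Python reading of the stated derivation formula. The encoding and concatenation details are guesses for illustration; the actual tool may differ.

# K_final = SHA256( H(answer1 + file_hash) + H(answer2 + file_hash) + ... )
import hashlib

def h(data: str) -> bytes:
    return hashlib.sha256(data.encode("utf-8")).digest()

def derive_key(answers: list[str], file_hash: str) -> bytes:
    chained = b"".join(h(answer + file_hash) for answer in answers)
    return hashlib.sha256(chained).digest()

key = derive_key(["first pet's name", "street I grew up on"], file_hash="ab12...")
print(key.hex())
# As the author notes, a memory-hard KDF such as Argon2 would replace the
# outer SHA256 in a future release to slow offline brute force.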

Thursday, August 21, 2025

New top story on Hacker News: Show HN: Tool shows UK properties matching group commute/time preferences

Show HN: Tool shows UK properties matching group commute/time preferences
4 by fryingdan | 2 comments on Hacker News.
I came up with this idea when I was looking to move to London with a friend. I quickly learned how frustrating it is to trial-and-error housing options for days on end, just to be denied after days of searching due to some grotesque counteroffer. To add to this, finding properties that meet the budgets, commuting preferences and work locations of everyone in a group is a Sisyphean task - it often ends in failure, with somebody exceeding their original budget or somebody dropping out.

To solve this I built a tool ( https://closemove.com/ ) that:

- lets you enter between 1-6 people’s workplaces, budgets, and maximum commute times
- filters public rental listings and only shows the ones that satisfy everyone’s constraints
- shows results in either a list or map view

No sign-up/validation required at present. Currently UK only, but please let me know if you'd want me to expand this to your city/country. This currently works best in London (with walking, cycling, driving and public transport links connected), and works decently in the rest of the UK (walking, cycling, driving only). This started as a side project and it still needs improvement. I’d appreciate any feedback!
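The core group filter can be sketched in a few lines. This is not the site's code; commute_minutes() stands in for a routing-API call, and the rent-splitting assumption (equal shares) is mine.

# Sketch: keep a listing only if it satisfies every person's budget and commute cap.
from dataclasses import dataclass

@dataclass
class Person:
    workplace: str
    max_rent_share: int      # per-person budget, GBP/month
    max_commute_min: int

def commute_minutes(listing: dict, workplace: str) -> int:
    return listing["commutes"][workplace]          # placeholder for a routing API

def suits_everyone(listing: dict, group: list[Person]) -> bool:
    per_head = listing["rent"] / len(group)        # assumes an equal split
    return all(
        per_head <= p.max_rent_share
        and commute_minutes(listing, p.workplace) <= p.max_commute_min
        for p in group
    )

group = [Person("Canary Wharf", 1100, 40), Person("Soho", 900, 35)]
listings = [{"rent": 1900, "commutes": {"Canary Wharf": 32, "Soho": 28}}]
print([x for x in listings if suits_everyone(x, group)])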

Sunday, August 17, 2025

New top story on Hacker News: AI Doesn't Lighten the Burden of Mastery; AI Makes It Easy to Stop Valuing It

AI Doesn't Lighten the Burden of Mastery; AI Makes It Easy to Stop Valuing It
28 by gwynforthewyn | 10 comments on Hacker News.


New top story on Hacker News: Show HN: NextDNS Adds "Bypass Age Verification"

Show HN: NextDNS Adds "Bypass Age Verification"
20 by nextdns | 3 comments on Hacker News.
We just shipped a new feature in NextDNS: Bypass Age Verification. More and more sites (especially adult ones) are now forcing users to upload IDs or selfies to continue. We think that’s a terrible idea: handing over government documents to random sites is a huge privacy risk. This new setting works around those verification flows via DNS tricks. It’s available today to all users, including free accounts. We’re curious how the HN community feels about this. Is it the right way to protect privacy online, or will it just provoke regulators to push harder? https://nextdns.io

Thursday, August 14, 2025

New top story on Hacker News: Show HN: Modelence – Supabase for MongoDB

Show HN: Modelence – Supabase for MongoDB
13 by artahian | 0 comments on Hacker News.
Hi all, Aram and Eduard here - authors of Modelence ( https://ift.tt/gOuKL6H ), an all-in-one backend platform for teams that love TypeScript + MongoDB. Think Supabase, but for MongoDB: auth, cron jobs, email, and monitoring without glue code before you can ship.

As Karpathy (and many of us) noted, getting from prototype to production is mostly painful integration work. The pieces exist, but stitching them together reliably is the hard part: https://ift.tt/K2bfkL6 . YC AI Startup School talk about this: https://www.youtube.com/watch?feature=shared&t=1940&v=LCEmiR... We intend to fill those gaps!

What you get out of the box:

- Authentication / user management
- Database
- Email integration (3rd party, but things like user verification emails work out of the box)
- AI integration
- Cron jobs
- Monitoring / Telemetry
- Configs & secrets
- Analytics (coming soon)
- File uploads (coming soon)

How it runs: A Node.js backend with MongoDB. It's frontend-agnostic, so you can use our minimal Vite + React starter or drop Modelence behind an existing Next.js (or any) frontend. We're also building a managed cloud, similar to what Vercel is for Next.js, except Modelence focuses on the backend instead of the frontend (Vercel is great for content sites like landing pages, blogs, etc., but things like persistent connections and complex backend logic outgrow it quickly).

You can find a quick demo here: https://www.youtube.com/watch?v=S4f22FyPpI8

We're looking for early users (especially TS teams on MongoDB). Tell us what's missing, what's confusing, and what you'd want before trusting this in prod. Happy to answer anything!

New top story on Hacker News: Show HN: OWhisper – Ollama for realtime speech-to-text

Show HN: OWhisper – Ollama for realtime speech-to-text
6 by yujonglee | 1 comments on Hacker News.
Hello everyone. This is Yujong from the Hyprnote team ( https://ift.tt/ntM5eVs ).

We built OWhisper for 2 reasons (also outlined in https://ift.tt/EsYpwQG ):

(1) While working with on-device, realtime speech-to-text, we found there isn't tooling that exists to download / run the model in a practical way.

(2) Also, we got frequent requests to provide a way to plug in custom STT endpoints to the Hyprnote desktop app, just like doing it with OpenAI-compatible LLM endpoints.

The (2) part is still kind of WIP, but we spent some time writing docs so you'll get a good idea of what it will look like if you skim through them.

For (1) - You can try it now ( https://ift.tt/zMj43vr ):

brew tap fastrepl/hyprnote && brew install owhisper
owhisper pull whisper-cpp-base-q8-en
owhisper run whisper-cpp-base-q8-en

If you're tired of Whisper, we also support Moonshine :) Give it a shot (owhisper pull moonshine-onnx-base-q8). We're here and looking forward to your comments!

Sunday, August 10, 2025

New top story on Hacker News: South Korea's military has shrunk by 20% in six years as male population drops

South Korea's military has shrunk by 20% in six years as male population drops
9 by eagleislandsong | 0 comments on Hacker News.


New top story on Hacker News: Show HN: Bolt – A super-fast, statically-typed scripting language written in C

Show HN: Bolt – A super-fast, statically-typed scripting language written in C
20 by beariish | 8 comments on Hacker News.
I've built many interpreters over the years, and Bolt represents my attempt at building the scripting language I always wanted. This is the first public release, 0.1.0! I've felt like most embedded languages have been moving towards safety and typing over the years, with things like Python type hints, the explosive popularity of TypeScript, and even typing in Luau, which powers one of the largest scripted environments in the world. Bolt attempts to harness this directly in the language rather than as a preprocessing step, and reap benefits in terms of both safety and performance. I intend to publish toys and examples of applications embedding Bolt over the coming few weeks, but be sure to check out the examples and the programming guide in the repo if you're interested!

New top story on Hacker News: Fight Chat Control

Fight Chat Control
60 by tokai | 5 comments on Hacker News.


Friday, August 1, 2025

New top story on Hacker News: I Created a Pop Star Using AI, and She's Dropping an Album in 2065

I Created a Pop Star Using AI, and She's Dropping an Album in 2065
4 by leorapini | 0 comments on Hacker News.


New top story on Hacker News: I couldn't submit a PR, so I got hired and fixed it myself

I couldn't submit a PR, so I got hired and fixed it myself
28 by skeptrune | 9 comments on Hacker News.


New top story on Hacker News: Show HN: TraceRoot – Open-source agentic debugging for distributed services

Show HN: TraceRoot – Open-source agentic debugging for distributed services
10 by xinweihe | 0 comments on Hacker News.
Hey, Xinwei and Zecheng here, we are the authors of TraceRoot ( https://ift.tt/L65WPZ7 ). TraceRoot ( https://traceroot.ai ) is an open-source debugging platform that helps engineers fix production issues faster by combining structured traces, logs, source code context, and discussions in GitHub PRs, issues, Slack channels, etc. with AI agents.

At the heart are our lightweight Python ( https://ift.tt/iGvl9bf ) and TypeScript ( https://ift.tt/T04t9pS ) SDKs - they can hook into your app using OpenTelemetry and capture logs and traces. These are either sent to a local Jaeger ( https://ift.tt/Nm1e7F8 ) + SQLite backend or to our cloud backend, where we correlate them into a single view.

From there, our custom agent takes over. The agent builds a heterogeneous execution tree that merges spans, logs, and GitHub context into one internal structure. This allows it to model the control and data flow of a request across services. It then uses LLMs to reason over this tree - pruning irrelevant branches, surfacing anomalous spans, and identifying likely root causes. You can ask questions like "what caused this timeout?" or "summarize the errors in these 3 spans", and it can trace the failure back to a specific commit, summarize the chain of events, or even propose a fix via a draft PR.

We also built a debugging UI that ties everything together - you explore traces visually, pick spans of interest, and get AI-assisted insights with full context: logs, timings, metadata, and surrounding code. Unlike most tools, TraceRoot stores long-term debugging history and builds structured context for each company - something we haven't seen many others do in this space.

What's live today:

- Python and TypeScript SDKs for structured logs and traces.
- AI summaries, GitHub issue generation, and PR creation.
- A debugging UI that ties everything together.

TraceRoot is MIT licensed and easy to self-host (via Docker). We support both local mode (Jaeger + SQLite) and cloud mode. Inspired by OSS projects like PostHog and Supabase - the core is free, while enterprise features like agent mode, multi-tenancy, and Slack integration are paid.

If you find it interesting, you can see a demo video here: https://www.youtube.com/watch?v=nb-D3LM0sJM

We'd love you to try TraceRoot ( https://traceroot.ai ) and share any feedback. If you're interested, our code is available here: https://ift.tt/L65WPZ7 . If we don't have something, let us know and we'd be happy to build it for you. We look forward to your comments!
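For context, here is plain OpenTelemetry instrumentation in Python of the kind the TraceRoot SDKs hook into and capture. This is not the TraceRoot SDK itself; spans are exported to the console here rather than to Jaeger or a cloud backend, and the service and attribute names are illustrative.

# Plain OpenTelemetry span + correlated log, exported to stdout for demo purposes.
import logging
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")
log = logging.getLogger("checkout")

def charge_card(order_id: str) -> None:
    with tracer.start_as_current_span("charge_card") as span:
        span.set_attribute("order.id", order_id)
        log.warning("card declined for %s", order_id)   # log emitted inside the span

charge_card("ord_123")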