Women Stack Daily


A daily briefing on technology and industry updates for tech professionals.
2026-04-05

Official Updates (8)

Google Gemma 4 26B A4B now available on Workers AI

We are partnering with Google to bring @cf/google/gemma-4-26b-a4b-it to Workers AI. Gemma 4 26B A4B is a Mixture-of-Experts (MoE) model built from Gemini 3 research, with 26B total parameters and only 4B active per forward pass. By activating a small subset of parameters during inference, the model runs almost as fast as a 4B-parameter model while delivering the quality of a much larger one. Gemma 4 is Google's most capable family of open models, designed to maximize intelligence-per-parameter.

Key capabilities:
- Mixture-of-Experts architecture with 8 active experts out of 128 total (plus 1 shared expert), delivering frontier-level performance at a fraction of the compute cost of dense models
- 256,000 token context window for retaining full conversation history, tool definitions, and long documents across extended sessions
- Built-in thinking mode that lets the model reason step-by-step before answering, improving accuracy on complex tasks
- Vision understanding for object detection, document and PDF parsing, screen and UI understanding, chart comprehension, OCR (including multilingual), and handwriting recognition, with support for variable aspect ratios and resolutions
- Function calling with native support for structured tool use, enabling agentic workflows and multi-step planning
- Multilingual, with out-of-the-box support for 35+ languages and pre-training on 140+ languages
- Coding: code generation, completion, and correction

Use Gemma 4 26B A4B through the Workers AI binding (env.AI.run()), the REST API at /run or /v1/chat/completions, or the OpenAI-compatible endpoint. For more information, refer to the Gemma 4 26B A4B model page.
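The OpenAI-compatible route mentioned above can be sketched in a few lines. This is a minimal illustration only: it assumes Cloudflare's usual account-scoped URL shape and uses placeholder credentials, not a tested integration.

```python
import json

# Model ID from the announcement; account ID and token are placeholders.
MODEL = "@cf/google/gemma-4-26b-a4b-it"
BASE = "https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/v1"

def build_chat_request(prompt: str, account_id: str, api_token: str):
    """Assemble an OpenAI-style chat.completions request for Gemma 4 26B A4B."""
    url = f"{BASE.format(account_id=account_id)}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Sending it is a plain HTTP POST (network call, shown but not executed here):
#   import urllib.request
#   url, headers, body = build_chat_request("Hello!", "<ACCOUNT_ID>", "<API_TOKEN>")
#   req = urllib.request.Request(url, body.encode(), headers)
#   print(json.load(urllib.request.urlopen(req)))
```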

Published: 2026-04-04 · Source: Cloudflare
Frontend · Backend · Architecture · Performance · AI

The uphill climb of making diff lines performant

GitHub shares the engineering trade-offs behind making diff lines performant, noting that the path to better performance is often found in simplicity.

Published: 2026-04-03 · Source: GitHub Blog
Performance

OpenAI acquires TBPN

OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.

Published: 2026-04-02 · Source: OpenAI
AI

AI Gateway: automatically retry on upstream provider failures

AI Gateway now supports automatic retries at the gateway level. When an upstream provider returns an error, your gateway retries the request based on the retry policy you configure, without requiring any client-side changes. You can configure the retry count (up to 5 attempts), the delay between retries (from 100ms to 5 seconds), and the backoff strategy (Constant, Linear, or Exponential). These defaults apply to all requests through the gateway, and per-request headers can override them. This is particularly useful when you do not control the client making the request and cannot implement retry logic on the caller side. For more complex failover scenarios — such as failing across different providers — use Dynamic Routing. For more information, refer to Manage gateways.
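The three backoff strategies map to simple delay formulas. The sketch below is an illustration of how such delays are typically computed; the exact formulas Cloudflare uses are not specified in the changelog, so treat these as assumptions.

```python
def retry_delay(strategy: str, base_ms: int, attempt: int) -> int:
    """Delay in ms before a given retry attempt (1-based), per backoff strategy.

    Mirrors the three strategies named in the changelog (Constant, Linear,
    Exponential); the formulas are common interpretations, not Cloudflare's
    published implementation.
    """
    if strategy == "constant":
        return base_ms                       # same delay every time
    if strategy == "linear":
        return base_ms * attempt             # grows by base_ms each retry
    if strategy == "exponential":
        return base_ms * 2 ** (attempt - 1)  # doubles each retry
    raise ValueError(f"unknown strategy: {strategy}")

# With the 100 ms minimum base and the 5-attempt maximum mentioned above:
for n in range(1, 6):
    print(n, retry_delay("exponential", 100, n))
```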

Published: 2026-04-02 · Source: Cloudflare
AI

Why we're rethinking cache for the AI era

The explosion of AI-bot traffic, representing over 10 billion requests per week, has opened up new challenges and opportunities for cache design. Cloudflare looks at some of the ways AI bot traffic differs from human traffic, and how cache systems need to evolve in response.

Published: 2026-04-02 · Source: Cloudflare Blog
Design · AI

Introducing EmDash — the spiritual successor to WordPress that solves plugin security

Cloudflare is launching the beta of EmDash, a full-stack serverless JavaScript CMS built on Astro 6.0. It combines the features of a traditional CMS with a modern plugin security model, running plugins in sandboxed Worker isolates.

Published: 2026-04-01 · Source: Cloudflare Blog
Security

Securing the open source supply chain across GitHub

Recent attacks on open source have focused on exfiltrating secrets. GitHub outlines the prevention steps you can take today, plus a look at the security capabilities it is working on.

Published: 2026-04-01 · Source: GitHub Blog
Security · Open Source

Tech News (8)

A look at how some teens use popular role-playing chatbots and, for parents, the high stakes task of understanding the impact of the possibly addictive products (New York Times)

New York Times: A look at how some teens use popular role-playing chatbots and, for parents, the high stakes task of understanding the impact of the possibly addictive products — When Quentin was 13, he kept seeing ads on YouTube for Talkie, an app with “countless A.I.s eager to speak with you.”

Published: 2026-04-04 · Source: Techmeme

Anthropic's Three-Agent Harness Supports Long-Running Full-Stack AI Development

Anthropic introduces a three-agent harness separating planning, generation, and evaluation to improve long-running autonomous AI workflows for frontend and full-stack development. Industry commentary highlights structured approaches, iterative evaluation, and practical methods to maintain coherence and quality over multi-hour AI coding sessions. By Leela Kumili

Published: 2026-04-04 · Source: InfoQ
Frontend · AI · AI Agent

How India's film industry is embracing AI, as studios use the tech to cut production time and costs, while union rules constrain its use in Hollywood (Munsif Vengattil/Reuters)

Munsif Vengattil / Reuters: How India's film industry is embracing AI, as studios use the tech to cut production time and costs, while union rules constrain its use in Hollywood — India's studios are transforming filmmaking by using AI to slash production time, cut costs and dub movies into numerous languages.

Published: 2026-04-04 · Source: Techmeme
AI

TigerFS Mounts PostgreSQL Databases as a Filesystem for Developers and AI Agents

TigerFS is a new experimental filesystem that mounts a database as a directory and stores files directly in PostgreSQL. The open source project exposes database data through a standard filesystem interface, allowing developers and AI agents to interact with it using common Unix tools such as ls, cat, find, and grep, rather than via APIs or SDKs. By Renato Losio

Published: 2026-04-04 · Source: InfoQ
Backend · AI · AI Agent · Open Source

Apple reportedly signed a 3rd-party driver, by Tiny Corp, for AMD or Nvidia eGPUs for Apple Silicon Macs; it's meant for AI research, not accelerating graphics (AppleInsider)

AppleInsider: Apple reportedly signed a 3rd-party driver, by Tiny Corp, for AMD or Nvidia eGPUs for Apple Silicon Macs; it's meant for AI research, not accelerating graphics — Apple has signed a driver for AMD or Nvidia eGPUs connected to Apple Silicon but there are some big caveats, and it won't improve your graphics.

Published: 2026-04-04 · Source: Techmeme
AI

Swift 6.3 Stabilizes Android SDK, Extends C Interop, and More

Swift 6.3 advances Swift's cross-platform story with official Android support, significantly improves C interoperability through the new @c attribute, and continues extending embedded programming support. It also strengthens the ecosystem with a unified build system direction and gives developers more low-level performance control. By Sergio De Simone

Published: 2026-04-03 · Source: InfoQ
DevOps · Performance · DX · Mobile

Research across 1,372 participants and 9K+ trials details "cognitive surrender", where most subjects had minimal AI skepticism and accepted faulty AI reasoning (Kyle Orland/Ars Technica)

Kyle Orland / Ars Technica: Research across 1,372 participants and 9K+ trials details “cognitive surrender”, where most subjects had minimal AI skepticism and accepted faulty AI reasoning — When it comes to large language model-powered tools, there are generally two broad categories of users.

Published: 2026-04-04 · Source: Techmeme
AI

Open Source Security Tool Trivy Hit by Supply Chain Attack, Prompting Urgent Industry Response

A major security incident affecting the widely used open source vulnerability scanner Trivy has exposed critical weaknesses in software supply chain security, after maintainers confirmed that a malicious release was briefly distributed to users. By Craig Risi

Published: 2026-04-03 · Source: InfoQ
Security · Open Source

Technical Reads (8)

How We Deployed a Static Site with Many Tools to Cloudflare Pages for Free

Small Helper Tools is a free, browser-based utility site with wheel spinners, dice rollers, finance calculators, text utilities, and other small tools. The author walks through how this multi-tool static site was deployed to Cloudflare Pages for free.

Published: 2026-04-05 · Source: Medium Web Development
Essay

Figma Auto Layout: The Complete Guide That Will Change How You Design Forever

Most designers waste 3+ hours every week on manual spacing adjustments. The article shows how Auto Layout solves that problem systematically.

Published: 2026-04-04 · Source: Medium UI Design
Design · Essay

Day 55 of Learning Java: Introduction to GUI in Java

So far, most of the author's Java programs were console-based, with input and output through the terminal. This installment moves on to the basics of building graphical user interfaces in Java.

Published: 2026-04-05 · Source: Medium Programming
Essay

Cache Invalidation Is Hard. Your Frontend Has Four of Them

A deep dive into frontend caching, cache busting, and why “just deploy” is never really just a deploy.

Published: 2026-04-04 · Source: Medium Frontend
Frontend · Essay

Architecture Is the Missing Layer in AI Harness Engineering

Originally published in longer form on Substack. This DEV version is adapted for software engineers and platform practitioners who want the practical takeaway quickly.

Most AI harness work focuses on execution. That makes sense. Teams need better context management, tool access, workflow boundaries, verification, memory, and sub-agent coordination. Without those pieces, coding agents are unreliable fast. But there is a different failure mode that those harness improvements do not solve: an agent can operate inside a well-designed execution harness and still produce the wrong architecture. That is the missing layer.

The Real Problem Is Not Just Code Quality

Ask an agent to design a small SaaS product and it will often produce something that is technically coherent and operationally excessive at the same time. You get things like:
- microservices where a monolith would do
- Kubernetes where managed PaaS is the obvious fit
- heavyweight observability and rollout machinery for a team with no real platform capacity
- provider choices that quietly add lock-in or operational burden
- reliability mechanisms sized for a much larger organization

None of that is necessarily irrational. It is just architecture optimized for an imaginary team. That is what happens when the harness governs what the agent can see and do, but not what kinds of systems it is allowed to design.

What the Harness Usually Misses

Most organizations already have architectural constraints, whether they write them down well or not:
- cost ceilings
- preferred cloud/SaaS providers
- approved deployment models
- auth and identity boundaries
- operational limits
- compliance expectations
- explicit exclusions

The problem is that these often live in docs, ADRs, wiki pages, tribal memory, and architecture review meetings. That is not enough for agent-driven workflows. If those constraints are not machine-readable and enforceable, the agent is still reasoning inside an underconstrained design space.

What I Mean by "Architecture Inside the Harness"

The core idea is simple: the harness should not only manage execution. It should also constrain architecture. In practice, that means three pieces.

1. A pattern registry. Architectural knowledge has to live somewhere reusable. A pattern in the registry can encode:
- what constraints it supports
- what NFR thresholds it can satisfy
- what it provides and requires
- what config decisions it exposes
- what cost and adoption trade-offs it carries

That turns architecture knowledge from conversation into versioned policy.

2. A deterministic architecture compiler. The compiler takes a canonical spec and selects patterns based on explicit rules. The key property is determinism: given the same inputs, it should produce the same outputs. That gives teams something they can actually review and approve. It also makes architectural change visible as a diff instead of as implementation drift discovered too late.

3. Workflow rules around the compiler. The compiler alone is not enough. You also need workflow discipline that tells the agent when to compile, when planning has surfaced a real architecture change, when re-approval is required, and when implementation is allowed to proceed. That is what turns architecture from documentation into a control point.

Why Determinism Matters

At the architecture layer, the problem is not mainly creativity. It is governance. That is why deterministic behavior matters more than people often expect. It gives you:
- reproducibility
- auditability
- explicit assumptions
- explicit exclusions
- a recompile-and-diff path when constraints change

For senior engineers and platform teams, that is much more useful than a model producing a plausible design summary in slightly different words each time.

A Concrete Example

I used this approach in a Bird ID application workflow. The product itself was simple: users upload bird photos, an AI model identifies likely species, and results are stored in per-user history. The important part was not the feature list. It was the operating context:
- hosted PaaS backend
- managed Postgres
- OIDC for auth
- object storage for uploads
- low traffic
- strong cost sensitivity
- no real ops team

Once those became compiler inputs, the architecture was constrained mechanically rather than conversationally. That made it much easier to reject patterns that would have been technically valid but wrong for the project: heavyweight deployment patterns, overly complex topology choices, and infrastructure layers that added operational cost without real payoff. The downstream effect mattered too. The approved architecture could then be handed to planning and implementation as an explicit contract instead of a loose design memo.

The Real Deliverable Is Not Better Documentation

The main output of this style of harnessing is not prettier architecture docs. The real output is an enforceable boundary between architecture and implementation. That boundary matters because implementation agents are good at creating drift quickly. If the architecture says OAuth2/OIDC with PKCE, hosted PaaS, managed Postgres, and a monolithic service topology, then implementation should not quietly reintroduce server-side session state, new provider choices, new persistence layers, or unnecessary distributed complexity. Without a hard boundary, those changes show up as "implementation details." In practice, they are architecture changes.

What Platform Teams Should Take From This

If you are building internal agent workflows, the practical lesson is: do not stop at context engineering. Context engineering improves what the agent can see. Tool engineering improves what the agent can do. But neither is enough to keep the system architecture aligned with actual team constraints. Platform teams need something stronger:
- explicit architecture inputs
- deterministic architecture selection
- approval and re-approval boundaries
- implementation workflows that are forced to stay inside the contract

That is what architecture inside the harness gives you.

Closing

The value of a harness is not only that it makes agents more capable. The value is that it bounds the solution space so capability is applied in the right direction. If the architecture layer stays implicit, fast agents will simply accelerate architectural drift. If the architecture layer becomes explicit, reviewable, and enforceable, then agent speed becomes much easier to trust. That is the argument: architecture is the missing layer in AI harness engineering.

Links:
- Longer Substack version: https://inetgas.substack.com/p/ai-harness-engineering-at-the-architecture
- Architecture Compiler: https://github.com/inetgas/arch-compiler
- Bird ID case study: https://github.com/inetgas/arch-compiler-ai-harness-in-action
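The registry-plus-deterministic-compiler idea can be illustrated in a few lines. Everything below (pattern names, constraint fields, thresholds) is hypothetical and far simpler than the linked arch-compiler; only the mechanism matters: same spec in, same pattern out, with the registry reviewable as plain data.

```python
PATTERN_REGISTRY = [
    # (name, typical monthly cost in USD, requires a real ops team?)
    ("managed-paas-monolith", 100, False),
    ("paas-plus-workers", 300, False),
    ("kubernetes-microservices", 2000, True),
]

def compile_architecture(spec: dict) -> str:
    """Select the first registry pattern that satisfies every constraint.

    The registry is scanned in a fixed, simplest-first order, so identical
    specs always compile to the identical pattern: the reviewable,
    diff-able property the article argues for.
    """
    for name, typical_cost, needs_ops in PATTERN_REGISTRY:
        if typical_cost <= spec["cost_ceiling_usd"] and (spec["has_ops_team"] or not needs_ops):
            return name
    raise ValueError("no registered pattern satisfies the spec")

# A Bird-ID-like spec (low cost ceiling, no ops team) mechanically rejects
# the Kubernetes pattern and lands on the monolith:
spec = {"cost_ceiling_usd": 150, "has_ops_team": False}
print(compile_architecture(spec))  # managed-paas-monolith
```

Ordering the registry simplest-first bakes the "reject operationally excessive patterns" bias directly into data, and a changed spec shows up as a different, reviewable selection rather than silent drift.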

Published: 2026-04-04 · Source: DEV Community
Backend · DevOps · Architecture · DX · Design

When AI Collapses Fact and Assumption

Blended inference is the baseline response mode of LLMs. Smooth prose is the goal. In software, that smoothness can hide the boundary between grounded analysis and inferred assumptions. The generation process does not distinguish between a token the model can support and one it filled in. Everything comes out at the same confidence level.

I ran a small experiment on a Python caching service by asking: We’re seeing latency spikes on our report generation API. What should we look at? The baseline response correctly identified concrete areas to improve in the file: no lock or request coalescing on cache miss, a cleanup job that scans all of Redis, a stale flag that never gets checked, and synchronized TTL expiry.

In the same answer, at the same confidence level, it also said things like:
- “If this runs periodically on the same Redis used by the API, it is a strong candidate for periodic spikes.”
- “If many hot reports are created around the same time—after deploy, after nightly prefetch, after business-hour traffic ramps up—they can expire around the same time too.”
- “Correlate p95/p99 latency with cache hit rate for /reports/generate.”

None of those lines are absurd. Some may even be useful. But the model did not know my Redis topology. It did not know my traffic shape. It did not know whether I had that telemetry. It did not verify the correlation it recommended. It moved from what it could support from the file to assumptions about the surrounding system and wrote both in the same voice. The burden of sifting grounded analysis out of a flood of smooth prose falls on me.

That changed what review required from me. I could not just ask whether a sentence was wrong. I had to decompose the answer: what came directly from the file, what followed from reasoning over the file, and what entered because the model filled in missing context.

I then reran the same prompt and the same code using the VDG protocol. The concrete analysis stayed. But the response could no longer glide past what it did not know. Instead of silently leaning on unknowns, it had to put those unknowns in the Gap section:
- “No request metrics were provided, so it is unknown whether spikes are dominated by aggregate_transactions runtime, Redis latency, or concurrent duplicate work.”
- “No Redis topology was provided. It is unknown whether this cache is dedicated or shared, how many total keys live in db=0, and whether Redis CPU or memory pressure is present.”
- “No traffic-shape data was provided. It is unknown whether a small set of hot report keys dominates traffic or whether demand is evenly distributed.”
- “No client retry behavior was provided. It is unknown whether callers retry generate aggressively on slow responses, which would magnify stampedes.”

I could see what the file supported, what the model inferred, and what remained open. That is the value of the VDG protocol: not just a consistent response shape, but a way to force the model to take on the burden of separating grounded analysis from inferred assumptions.

Published: 2026-04-05 · Source: DEV Community
Backend · Performance · AI

Community Signals (10)

Gemma 4 31B beats several frontier models on the FoodTruck Bench

Gemma 4 31B takes an incredible 3rd place on FoodTruck Bench, beating GLM 5, Qwen 3.5 397B and all Claude Sonnets! I'm looking forward to how they'll explain the result. Based on the previous models that failed to finish the run, it would…

Published: 2026-04-04 · Source: Reddit

[D] Those of you with 10+ years in ML — what is the public completely wrong about?

For those of you who've been in ML/AI research or applied ML for 10+ years — what's the gap between what the public thinks AI is doing vs. what's actually happening at the frontier? What are we collectively underestimating or overestimating?

Published: 2026-04-04 · Source: Reddit
AI

[D] KDD Review Discussion

KDD 2026 (Feb cycle) reviews will release today (4 April AoE). This thread is open to discuss reviews and, importantly, to celebrate successful ones. Let us all remember that the review system is noisy and we all suffer from it.

Published: 2026-04-04 · Source: Reddit

People anxious about deviating from what AI tells them to do?

My friend came over yesterday to dye her hair. She had asked ChatGPT for the 'correct' way to do it. Chat told her to dye the ends first, wait about 20 minutes, and then do the roots. Because of my own experience with dyeing my hair, that…

Published: 2026-04-04 · Source: Reddit
AI

Built a Nepali calendar computation engine in Python, turns out there's no formula for it

**What My Project Does** Project Parva is a REST API that computes Bikram Sambat (Nepal's official calendar) dates, festival schedules, panchanga (lunar almanac), muhurta (auspicious time windows), and Vedic birth charts. The author found that the calendar cannot be derived from a single formula; everything has to be computed from reference data combined with rules.
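The "no single formula" point is concrete: Bikram Sambat month lengths vary year by year and come from published tables, so computation is lookup plus rules rather than arithmetic. A minimal sketch of the table-driven approach, with invented month lengths for illustration only (a real implementation like Project Parva ships verified per-year data from almanac sources):

```python
# Bikram Sambat month lengths differ per year and are taken from published
# tables, not computed. The values below are INVENTED placeholders that
# merely sum to a plausible year length; they are NOT real BS data.
BS_MONTH_LENGTHS = {
    2081: [31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30],  # illustrative only
}

def bs_day_of_year(year: int, month: int, day: int) -> int:
    """1-based ordinal day within a BS year, via table lookup (no formula)."""
    months = BS_MONTH_LENGTHS[year]
    if not 1 <= day <= months[month - 1]:
        raise ValueError("day out of range for that month")
    return sum(months[: month - 1]) + day
```

Everything downstream (festival schedules, conversions to the Gregorian calendar) then hangs off such tables, which is why the engine is data-plus-rules rather than a closed-form function.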

Published: 2026-04-04 · Source: Reddit
Backend

Showcase Thread

Post all of your code/projects/showcases/AI slop here. Recycles once a month.

Published: 2026-04-04 · Source: Reddit
AI