Tags: algorithm*


  1. This paper introduces a new class of "unbounded" spigot algorithms for calculating the decimal digits of $\pi$, improving upon the classic Rabinowitz–Wagon method. While previous spigot algorithms required users to commit to a specific number of digits in advance and risked carry-over errors from truncated series, the proposed approach eliminates those limitations by allowing unlimited digit generation given sufficient memory. Although not intended to compete with high-performance state-of-the-art arithmetic-geometric mean algorithms, the author's method offers a mathematically robust, simple, and incrementally efficient way to produce digits one by one without prior commitment or risk of truncation errors.
    2026-04-14 by klotz
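The streaming behavior described above can be made concrete with Gibbons' well-known unbounded spigot, shown here as a Python generator. This is a sketch of that widely circulated algorithm; it may or may not be the exact variant presented in the bookmarked paper:

```python
def pi_digits():
    """Yield decimal digits of pi one at a time, forever (Gibbons' spigot).

    The state (q, r, t) tracks a linear fractional transformation; a digit
    is emitted only once it is provably correct, so no precommitment to a
    digit count is needed and no truncation error can occur.
    """
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit n is now certain: emit it and rescale by 10.
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            # Not enough information yet: consume another series term.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
```

Because the state uses Python's arbitrary-precision integers, the generator runs until memory is exhausted, exactly the "unbounded" property the paper emphasizes.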
  2. NEXUS is a production-grade, full-text and semantic search engine built from scratch, implementing advanced data structures and distributed systems concepts. It focuses on probabilistic optimization, sub-millisecond latency, and hybrid AI-powered search. The project demonstrates core technologies like LSM Trees, Bloom Filters, HNSW Graphs, and W-TinyLFU caches, integrated into a high-performance pipeline. It also includes a LeetCode algorithm library with implementations of classic interview patterns and provides insights into distributed crawling and persistent storage.
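Of the data structures listed, the Bloom filter is the simplest to illustrate: set membership with no false negatives and a tunable false-positive rate. A minimal sketch (the class name, sizing, and hashing scheme are illustrative, not NEXUS's actual API):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array."""

    def __init__(self, m_bits=1024, k=3):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _probes(self, item):
        # Derive k bit positions by salting a single hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._probes(item))
```

In an LSM-tree setting like the one described, such a filter sits in front of each on-disk segment so that most point lookups skip segments that cannot contain the key.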
  3. An exploration of Claude 3 Opus's coding capabilities, specifically its ability to generate a functional CLI tool for the Minimax algorithm with a single prompt. The article details the prompt used, the generated code, and the successful execution of the tool, highlighting Claude's impressive one-shot learning and code generation abilities.
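For reference, the core of Minimax fits in a few lines. This is a generic sketch over a game tree given as nested lists (leaf integers are terminal scores), not the CLI tool the article shows Claude generating:

```python
def minimax(node, maximizing):
    """Return the value of a game tree under optimal play.

    Internal nodes are lists of children; leaves are integer scores
    from the maximizing player's point of view.
    """
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)
```

A practical tool adds move generation, depth limits, and alpha-beta pruning on top of this recursion, which is presumably what the one-shot prompt asked Claude to produce.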
  4. This article explains the Greedy Boruta algorithm, a faster alternative to the traditional Boruta algorithm for feature selection. It details how it works, its advantages, and provides a Python implementation.
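The core Boruta idea is to compare each real feature's importance against "shadow" copies whose values have been shuffled. A rough NumPy sketch of that idea, using absolute correlation with the target as a stand-in for the random-forest importances the article uses (function name and defaults are illustrative):

```python
import numpy as np

def shadow_feature_select(X, y, n_trials=20, seed=0):
    """Boruta-style selection sketch: keep features whose importance
    beats the best shuffled 'shadow' feature in most trials."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    hits = np.zeros(d, dtype=int)
    for _ in range(n_trials):
        shadows = rng.permuted(X, axis=0)        # shuffle each column independently
        data = np.hstack([X, shadows])
        # Importance proxy: |correlation with the target|.
        imp = np.abs([np.corrcoef(data[:, j], y)[0, 1] for j in range(2 * d)])
        threshold = imp[d:].max()                # best shadow importance
        hits += imp[:d] > threshold              # count a "hit" for each winner
    return hits > n_trials / 2                   # keep majority winners
```

The "greedy" variant the article describes speeds this loop up by deciding on features early instead of running every trial for every feature; the shadow-comparison principle is the same.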
  5. A recap of the author's Boggle project, including media coverage, a published paper on arXiv.org, new optimizations, and reflections on the challenges and future directions.
  6. This article details how to build a lightweight and efficient rules engine by recasting propositional logic as sparse algebra. It guides readers through the process from theoretical foundations to practical implementation, introducing concepts like state vectors and algebraic operations for logical inference.
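To make the state-vector idea concrete, here is a toy sketch (the facts and rules are invented for illustration, not taken from the article): facts live in a boolean vector, single-antecedent rules in a boolean matrix, and forward chaining is an iterated masked "or" to a fixed point:

```python
import numpy as np

facts = ["rainy", "wet_ground", "slippery"]
# R[i, j] = True means "fact j implies fact i" (single-antecedent rules).
R = np.array([
    [0, 0, 0],   # nothing implies rainy
    [1, 0, 0],   # rainy -> wet_ground
    [0, 1, 0],   # wet_ground -> slippery
], dtype=bool)

state = np.array([True, False, False])   # observed: rainy
while True:                              # forward-chain to a fixed point
    derived = (R & state).any(axis=1)    # fire every enabled rule at once
    new = state | derived
    if (new == state).all():
        break
    state = new
# state now marks every derivable fact
```

With sparse matrices this pattern scales to large rule sets, since each inference pass is a single sparse matrix-vector product rather than per-rule interpretation, which is the efficiency argument the article builds on.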
  7. This article by Zelda B. Zabinsky provides an overview of random search algorithms, which are particularly useful for tackling complex global optimization problems with either continuous or discrete variables. These algorithms, including simulated annealing, genetic algorithms, and particle swarm optimization, leverage randomness or probability in their iterative processes, often falling under the category of metaheuristics. Such methods are valuable for problems characterized by nonconvex, nondifferentiable, or discontinuous objective functions, as they offer a trade-off between optimality and computational speed. Random search algorithms can be categorized by their approach to exploration versus exploitation, and their application spans various fields, including engineering, scheduling, and biological systems. They address challenges where traditional deterministic methods struggle, particularly in the absence of clear structures distinguishing local from global optima.
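Of the methods named, simulated annealing is the easiest to sketch: always accept downhill moves, and accept uphill moves with probability exp(-Δ/T) under a decreasing temperature, so the search can escape local minima early and settles down later (all parameters below are illustrative defaults):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=1):
    """Minimize a 1-D function f by simulated annealing.

    Uphill moves are accepted with probability exp(-(fc - fx) / T)
    under a geometric cooling schedule T <- cooling * T.
    """
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)      # random neighbor
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc                     # accept the move
            if fx < fbest:
                best, fbest = x, fx              # track the incumbent
        t *= cooling
    return best, fbest
```

The objective here needs no gradient, no convexity, and no continuity, which is exactly the trade-off Zabinsky highlights: weaker guarantees of optimality in exchange for broad applicability.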
  8. A young computer scientist at Rutgers University, along with his former professor and a colleague from Carnegie Mellon University, has disproved a 40-year-old conjecture in data science related to hash tables, showing that a new type can achieve faster search times than previously thought possible.

    Andrew Krapivin, an undergraduate at Rutgers University, together with Martín Farach-Colton and William Kuszmaul, challenged a 40-year-old conjecture by demonstrating that a new type of hash table, inspired by "tiny pointers," can perform searches and insertions faster than previously thought possible. The conjecture held that the worst-case time for these operations must grow in proportion to x, where x measures how full the table is (for a table with a δ fraction of slots still empty, x = 1/δ). Their construction instead achieves time proportional to (log x)^2, which is dramatically faster for nearly full tables. The work also refutes a second conjecture concerning non-greedy hash tables, showing that a constant average query time is achievable regardless of how full the table gets.
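To make the conjectured x bound concrete: with a δ fraction of slots empty (x = 1/δ), classical greedy uniform probing tries random slots until it finds an empty one, which takes about x probes on average. That is the baseline the new hash table's (log x)^2 bound beats. A quick simulation sketch (sizes and trial counts are arbitrary):

```python
import random

def avg_probes(m=10_000, x=8, trials=2_000, seed=0):
    """Average probes for greedy uniform probing into a table where a
    fraction 1 - 1/x of slots are full; geometric with mean about x."""
    rng = random.Random(seed)
    full = set(rng.sample(range(m), m - m // x))  # fill all but m/x slots
    total = 0
    for _ in range(trials):
        probes = 1
        while rng.randrange(m) in full:           # keep probing until empty
            probes += 1
        total += probes
    return total / trials
```

At x = 8 this averages roughly 8 probes, while (log x)^2 is about 4.3 and grows far more slowly as the table fills, which is what makes the new result surprising.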
  9. A method called location arithmetic, first described by John Napier in 1617, uses a checkerboard to perform various mathematical calculations, including multiplication, division, and taking the square root, by breaking numbers into their binary equivalents and moving markers around the board.
    2025-01-04 by klotz
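The binary trick behind location arithmetic is shift-and-add: each number becomes a set of powers of two (counters on the board), and a product is the sum of pairwise products of those powers. A minimal sketch of the multiplication step:

```python
def location_multiply(a, b):
    """Napier-style multiplication: decompose both operands into powers
    of two and sum the pairwise products 2**(i + j)."""
    a_bits = [i for i in range(a.bit_length()) if a >> i & 1]
    b_bits = [j for j in range(b.bit_length()) if b >> j & 1]
    return sum(1 << (i + j) for i in a_bits for j in b_bits)
```

On Napier's checkerboard, placing counters at positions (i, j) and collecting them along diagonals performs exactly this sum, with carries handled by replacing two counters on one diagonal with one counter on the next.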


SemanticScuttle - klotz.me: tagged with "algorithm"
