Claude Code, Codex, Gemini, Blackbox.
All Agents in One.
BLACKBOX unifies every major AI agent into one powerful interface. One subscription. One workflow. Unlimited possibilities. Join 30M+ builders on BLACKBOX.
Teams at Fortune 500 companies that
depend on BlackBox.AI

BlackBox
signal-limits-service
service
RateLimiter.java
RateLimitConfig.java
RateLimitMetrics.java
delivery
MessageDeliveryLoop.java
NoopDeliveryLoop.java
RedisMessageDelivery.java
challenges
ChallengeManager.java
ChallengeOption.java
RateLimitMetrics.java
RateLimiter.java
ChallengeOption.java
package service.limits;

import java.time.Instant;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

public class RateLimitMetrics {

    // Planned additions from the task outline: resetCount, markReset(), getResetCount()

    private final AtomicLong allowed = new AtomicLong();
    private final AtomicLong blocked = new AtomicLong();
    private final AtomicReference<Instant> lastAllowed = new AtomicReference<>();
    private final AtomicReference<Instant> lastBlocked = new AtomicReference<>();

    // Epoch-millis timestamps of recent requests, used for the rolling rate estimate
    private final ConcurrentLinkedQueue<Long> samples = new ConcurrentLinkedQueue<>();

    // Daemon thread so the metrics cleaner never keeps the JVM alive
    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "rate-limit-metrics-cleaner");
                t.setDaemon(true);
                return t;
            });

    public RateLimitMetrics() {
        cleaner.scheduleAtFixedRate(this::trim, 1, 1, TimeUnit.SECONDS);
    }

    public void markAllowed() {
        allowed.incrementAndGet();
        lastAllowed.set(Instant.now());
        samples.add(System.currentTimeMillis());
    }

    public void markBlocked() {
        blocked.incrementAndGet();
        lastBlocked.set(Instant.now());
        samples.add(System.currentTimeMillis());
    }

    public long getAllowed() {
        return allowed.get();
    }

    public long getBlocked() {
        return blocked.get();
    }

    // Approximate requests per second over the last 60 seconds
    public double requestsPerSecond() {
        long now = System.currentTimeMillis();
        long count = samples.stream().filter(t -> now - t <= 60_000).count();
        return count / 60.0;
    }

    // Drop samples older than the 60-second window; runs once per second on the cleaner thread
    private void trim() {
        long now = System.currentTimeMillis();
        while (true) {
            Long head = samples.peek();
            if (head == null) break;
            if (now - head > 60_000) samples.poll();
            else break;
        }
    }

    @Override
    public String toString() {
        return "allowed=" + allowed.get()
                + ", blocked=" + blocked.get()
                + ", rps=" + requestsPerSecond();
    }
}
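For context on how this class would be consumed, here is a minimal sketch of a fixed-window limiter that records its decisions through RateLimitMetrics. The class name SimpleRateLimiter, the tryAcquire method, and the per-key window bookkeeping are illustrative assumptions for this sketch, not the actual RateLimiter.java from the project tree.

package service.limits;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical fixed-window limiter wired to RateLimitMetrics; not the real RateLimiter.java.
public class SimpleRateLimiter {

    private final int maxPerWindow;
    private final long windowMillis;
    private final RateLimitMetrics metrics;
    // One counting window per caller key; stale entries are simply overwritten on next use.
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    public SimpleRateLimiter(int maxPerWindow, long windowMillis, RateLimitMetrics metrics) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
        this.metrics = metrics;
    }

    // Returns true and records an "allowed" sample if the caller is still under its limit.
    public boolean tryAcquire(String key) {
        long now = System.currentTimeMillis();
        Window w = windows.compute(key, (k, existing) ->
                (existing == null || now - existing.start >= windowMillis)
                        ? new Window(now) : existing);
        if (w.count.incrementAndGet() <= maxPerWindow) {
            metrics.markAllowed();
            return true;
        }
        metrics.markBlocked();
        return false;
    }

    private static final class Window {
        final long start;
        final AtomicInteger count = new AtomicInteger();
        Window(long start) { this.start = start; }
    }
}

A caller would construct it as new SimpleRateLimiter(100, 60_000, metrics) and gate each request on tryAcquire(clientKey); the metrics object then exposes getAllowed(), getBlocked(), and requestsPerSecond() for dashboards.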
Task
Improve the rate limiting implementation.
Thought for 2 minutes.
I'll proceed with implementing all the improvements in a logical order starting with Enhanced Metrics and Observability, then Performance Optimizations, and finally Advanced Rate Limiting Features. Let me create a TODO.md file to track our progress:
TODO.md
BlackBox.AI is thinking...
Now let's start with Phase 1: Enhanced Metrics and Observability. First, I'll create a comprehensive metrics class for rate limiting:
RateLimitMetrics.java
Cancel
Type a message (@ to add files)...
Auto
State-of-the-art performance
BLACKBOX Autonomous Agent
Run multiple autonomous agents in parallel, locally and remotely, to execute your tasks securely.
Multiple layers of protection designed to meet the strictest security and compliance requirements.
Multi-Agent Remote Execution
Multi-agent remote execution with a chairman LLM to select the best execution for each task.
Multi-Agent Execution
BLACKBOX dispatches the same task to multiple agents at once: BLACKBOX, Claude Code, Codex, Gemini.
Chairman LLM
Every task is implemented differently by each agent, and the chairman LLM selects the best implementation for users.
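As a rough sketch of the dispatch-and-select pattern described above: one task is fanned out to several agents in parallel, and a chairman model picks the best candidate. All interfaces and names below are hypothetical illustrations, not BLACKBOX's actual API.

import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical agent and chairman interfaces for illustration only.
interface CodingAgent {
    String name();
    String implement(String task); // returns a candidate implementation, e.g. a diff
}

interface ChairmanLlm {
    // Given the task and all candidate implementations, return the index of the best one.
    int pickBest(String task, List<String> candidates);
}

class MultiAgentDispatcher {
    private final List<CodingAgent> agents;
    private final ChairmanLlm chairman;

    MultiAgentDispatcher(List<CodingAgent> agents, ChairmanLlm chairman) {
        this.agents = agents;
        this.chairman = chairman;
    }

    // Fan the same task out to every agent in parallel, then let the chairman choose.
    String run(String task) {
        List<CompletableFuture<String>> futures = agents.stream()
                .map(a -> CompletableFuture.supplyAsync(() -> a.implement(task)))
                .toList();
        List<String> candidates = futures.stream().map(CompletableFuture::join).toList();
        return candidates.get(chairman.pickBest(task, candidates));
    }
}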
Long running tasks
Support for long-running tasks that require extended hours of execution in isolated sandbox environments where agents can install, implement, and test.
Monitor agents concurrently
One view to monitor all agents' implementations for individuals and teams collaborating on different projects.
BLACKBOX Agents on 35+ IDEs
BLACKBOX Agents run tasks on the BLACKBOX IDE, VS Code, JetBrains IDEs & more…
Large code base context
The BLACKBOX coding agent is optimized for both small and very large production-grade codebases that require extensive context.
Controllable Autonomy
Control the level of autonomy you want to grant the BLACKBOX coding agent while it executes your tasks.
Plan, Execute, Test
BLACKBOX agents are designed to implement tasks at a production level with detailed planning, high-precision execution, and a thorough testing phase.
Browser Agent
The BLACKBOX coding agent ships with many built-in tools, among them the browser agent, which lets it autonomously test and iterate on its implementation.
BLACKBOX CLI
AI-powered development, right from your terminal.
AI-Powered Code Generation
Generate APIs, scripts, components, and features directly from your terminal using simple natural-language prompts.
Smart Debugging & Fixes
Identify bugs, understand errors, and apply AI-generated fixes automatically without leaving the CLI.
End-to-End Project Automation
Set up projects, generate tests, configure CI/CD, and manage deployments — all from one CLI.
Works Across Any Stack
Use the CLI with Python, JavaScript, React, Node.js, databases, and more — no framework lock-in.
Frequently Asked Questions
Looking for something in particular? Don't hesitate to reach out.
Can I try BLACKBOX AI Enterprise before committing?
We offer generous credits for teams at large enterprises to experiment with the BLACKBOX Agent before committing.
Can BLACKBOX AI be deployed on-premise?
What integrations are available?
Do you offer volume discounts for large teams?
What support response times can we expect?
How does BLACKBOX AI ensure code privacy?
Start with BLACKBOX
Join top Fortune 500 companies using
BlackBox AI Enterprise.
Money-back guarantee
Enterprise-grade security
Response within 4 hours
©2025 BlackBox. 535 Mission Street, San Francisco, CA, USA







