Proxy on Scala: secure and scalable access to the OpenAI API
When working with the OpenAI API in corporate and product projects, tasks often arise:
centralize API access
log requests and responses
restrict access by IP or token
provide access from restricted regions
These tasks are conveniently solved through an intermediate proxy service.
This article examines a real Scala 3 application that:
transparently proxies HTTP requests
does not change the API structure
is easily deployed on a VPS
is ready to be extended
Solution Architecture
Request Flow:
Client → HTTPS (Apache/Nginx) → Scala proxy → OpenAI API
The principle of operation:
the client sends a request to your domain
reverse-proxy accepts HTTPS
The Scala service proxies the request to OpenAI
the response is returned to the client
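Because the proxy is transparent, only the authority of the URL changes, while the path and query string pass through unchanged. That rewrite step can be sketched with plain java.net.URI (the proxy domain below is a placeholder):

```scala
import java.net.URI

// Rebuild the incoming request's URI against the upstream base URL,
// keeping the original path and query string intact.
def rewrite(original: URI, upstreamBase: URI): URI =
  new URI(
    upstreamBase.getScheme,    // https
    upstreamBase.getAuthority, // api.openai.com
    original.getPath,          // e.g. /v1/chat/completions
    original.getQuery,         // query string, if any
    null                       // drop any fragment
  )

@main def demoRewrite(): Unit =
  val incoming = new URI("https://proxy.example.com/v1/models?limit=5")
  println(rewrite(incoming, new URI("https://api.openai.com")))
  // prints https://api.openai.com/v1/models?limit=5
```

The http4s implementation below does the same job with withPath and a copied query.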
Project Setup (Scala + http4s)
build.sbt
val scala3Version = "3.3.6"
lazy val root = project
.in(file("."))
.settings(
name := "openai-proxy",
version := "0.1.0",
scalaVersion := scala3Version,
libraryDependencies ++= Seq(
"org.http4s" %% "http4s-dsl" % "0.23.30",
"org.http4s" %% "http4s-ember-server" % "0.23.30",
"org.http4s" %% "http4s-ember-client" % "0.23.30",
"org.typelevel" %% "cats-effect" % "3.5.4",
"org.typelevel" %% "log4cats-slf4j" % "2.7.0",
"ch.qos.logback" % "logback-classic" % "1.5.16",
"com.typesafe" % "config" % "1.4.3"
)
)
Application Configuration
application.conf
server {
host = "127.0.0.1"
port = 8080
}
openai {
base-url = "https://api.openai.com"
}
AppConfig.scala
final case class AppConfig(
host: String,
port: Int,
baseUrl: String
)
object AppConfig {
def load(): AppConfig = {
val cfg = com.typesafe.config.ConfigFactory.load()
AppConfig(
host = cfg.getString("server.host"),
port = cfg.getInt("server.port"),
baseUrl = cfg.getString("openai.base-url").stripSuffix("/")
)
}
}
Proxy implementation
The main logic is implemented in ProxyRoutes
Removing hop-by-hop headers
import org.typelevel.ci.*

// Header names are case-insensitive, so compare them as CIString values
// rather than exact-case strings.
val HopHeaders = Set(
  ci"Connection",
  ci"Keep-Alive",
  ci"Transfer-Encoding",
  ci"Upgrade"
)

def cleanHeaders(headers: Headers): Headers =
  Headers(headers.headers.filterNot(h => HopHeaders.contains(h.name)))
Request proxying
def proxy(client: Client[IO], baseUrl: String): HttpApp[IO] =
HttpApp[IO] { req =>
val targetUri = Uri
.unsafeFromString(baseUrl)
.withPath(req.uri.path)
.copy(query = req.uri.query)
val proxiedRequest = Request[IO](
method = req.method,
uri = targetUri,
headers = cleanHeaders(req.headers),
body = req.body
)
// Tie the connection's lifetime to the response body: release the
// underlying HTTP connection only after the body stream is consumed.
// With `use`, the connection would be closed before the client reads
// the body, cutting off streaming responses.
client.run(proxiedRequest).allocated.flatMap { case (resp, release) =>
  IO.pure(
    Response[IO](
      status = resp.status,
      headers = resp.headers,
      body = resp.body.onFinalize(release)
    )
  )
}
}
Logging
def logRequest(req: Request[IO]): IO[Unit] =
  IO.println(s">>> ${req.method} ${req.uri.path}")

def logResponse(status: Status): IO[Unit] =
  IO.println(s"<<< $status")
Using:
for {
_ <- logRequest(req)
response <- client.run(proxiedRequest).use { resp =>
logResponse(resp.status) *>
IO.pure(Response[IO](status = resp.status, headers = resp.headers, body = resp.body))
}
} yield response
Error handling
.handleErrorWith { e =>
IO.println(s"Error: ${e.getMessage}") *>
IO.pure(Response[IO](Status.BadGateway).withEntity("Proxy error"))
}
Application Entry Point
object Main extends IOApp {
override def run(args: List[String]): IO[ExitCode] = {
val config = AppConfig.load()
EmberClientBuilder.default[IO].build.use { client =>
val app = proxy(client, config.baseUrl)
EmberServerBuilder.default[IO]
.withHost(Host.fromString(config.host).getOrElse(ipv4"127.0.0.1"))
.withPort(Port.fromInt(config.port).getOrElse(port"8080"))
.withHttpApp(app)
.build
.use(_ => IO.never)
}.as(ExitCode.Success)
}
}
Query examples
Getting models:
curl http://localhost:8080/v1/models \
-H "Authorization: Bearer sk-..."
Chat completion:
curl http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer sk-..." \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "user", "content": "Hello"}
]
}'
Streaming:
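With "stream": true (as in the curl request below), the endpoint replies with server-sent events: each chunk is a data: line, and the stream ends with a data: [DONE] sentinel. Extracting the JSON payloads on the client side takes only a few lines of plain Scala (a sketch; real chunks can split across network reads):

```scala
// Extract the JSON payloads from an SSE chunk, as produced by
// /v1/chat/completions when "stream": true is set.
def ssePayloads(chunk: String): List[String] =
  chunk.linesIterator
    .filter(_.startsWith("data: "))
    .map(_.stripPrefix("data: "))
    .filter(_ != "[DONE]") // the closing sentinel carries no JSON
    .toList

@main def demoSse(): Unit =
  val chunk =
    """data: {"choices":[{"delta":{"content":"Hi"}}]}
      |
      |data: [DONE]
      |""".stripMargin
  println(ssePayloads(chunk))
  // prints List({"choices":[{"delta":{"content":"Hi"}}]})
```

The proxy itself needs no SSE-specific code: http4s streams the body lazily, so chunks flow through as they arrive.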
curl -N http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer sk-..." \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Tell me a joke"}],
"stream": true
}'
Deployment to the server
systemd service:
[Service]
ExecStart=/usr/bin/java -jar app.jar
Restart=always
Apache reverse proxy:
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
Practical improvements
Adding an access token:
val ProxyToken = "secret123"
def auth(req: Request[IO]): Boolean =
req.headers.get(CIString("X-Proxy-Token"))
.exists(_.head.value == ProxyToken)
Simple query limitation:
// A plain var is racy under concurrent requests; use an atomic counter.
val counter = new java.util.concurrent.atomic.AtomicInteger(0)

def limit(): IO[Unit] =
  IO {
    if (counter.incrementAndGet() > 1000)
      throw new Exception("Rate limit exceeded")
  }
Health-check endpoint:
case GET -> Root / "health" =>
Ok("ok")
When it is applicable
SaaS with LLM integration
Telegram bots
AI agents
internal corporate services
Conclusion
The Scala proxy is a compact and manageable solution for working with the OpenAI API.
It allows you to centralize access, control usage, and gradually expand functionality without changing client applications.
As the project develops, you can add authorization, billing, metrics, and support for multiple AI providers.