Runtime error

Exit code: 3. Reason:

INFO:     Started server process [1]
INFO:     Waiting for application startup.
llama-server exited with code 1
Output:
warning: no usable GPU found, --gpu-layers option will be ignored
warning: one possible reason is that llama.cpp was compiled without GPU support
warning: consult docs/build.md for compilation instructions
HTTPS is not supported. Please rebuild with:
    -DLLAMA_BUILD_BORINGSSL=ON
    -DLLAMA_BUILD_LIBRESSL=ON
or ensure dev files of an OpenSSL-compatible library are available when building.
ERROR:    Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/starlette/routing.py", line 694, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "/usr/local/lib/python3.11/dist-packages/starlette/routing.py", line 571, in __aenter__
    await self._router.startup()
  File "/usr/local/lib/python3.11/dist-packages/starlette/routing.py", line 671, in startup
    await handler()
  File "/home/user/app.py", line 599, in startup_event
    process, load_time = await start_llama_server(model_id, port)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/app.py", line 529, in start_llama_server
    raise RuntimeError("llama-server process died")
RuntimeError: llama-server process died
ERROR:    Application startup failed. Exiting.
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f242e3a6290>
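The traceback shows `start_llama_server` in app.py raising `RuntimeError("llama-server process died")` after the child process exits (here with code 1, following the no-GPU and no-HTTPS warnings). A minimal sketch of what that startup path likely looks like is below; the function name and the raised message come from the traceback, but the command, timeout values, and liveness-polling details are assumptions (the real code would also probe the server's HTTP port for readiness):

```python
import asyncio

async def start_llama_server(cmd, timeout=30.0, poll=0.5):
    """Launch the server process and watch it during startup.

    Sketch of the logic implied by the traceback: if the child exits
    before startup completes, raise the RuntimeError seen in the log.
    """
    loop = asyncio.get_running_loop()
    start = loop.time()
    process = await asyncio.create_subprocess_exec(*cmd)
    while loop.time() - start < timeout:
        if process.returncode is not None:
            # llama-server exited early (e.g. code 1 after the
            # GPU/HTTPS warnings in the log above)
            raise RuntimeError("llama-server process died")
        # a real implementation would probe the server's /health
        # endpoint here instead of only sleeping
        await asyncio.sleep(poll)
    load_time = loop.time() - start
    return process, load_time
```

The trailing "Unclosed client session" warning is separate: an `aiohttp.ClientSession` created during startup was never closed when startup failed; closing it in a shutdown handler (or a `finally` block around the startup probe) would silence it.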
