
Disk Space Exhaustion leading to a Denial of Service (DoS)

Medium
C
curl
Reported by tryhackplanet

Vulnerability Details

Technical details and impact analysis

LLM04: Model Denial of Service
# Description

The `tool_debug_cb` function can write large amounts of debug data to a log file when the `--trace` or `--trace-ascii` options are used with a large volume of data. If an attacker can cause curl to download or upload a very large amount of data (e.g. via a very large HTTP response or an unlimited upload), the log file generated by this debug function can grow indefinitely. This can exhaust disk space on the system where curl is running, which in turn can disrupt other services running on the same server.

The following section writes raw HTTP header data to `heads->stream` when `--dump-header` is used. If `heads->stream` happens to be the same file as the trace output, or another file with unlimited growth potential, it contributes to the problem.

```
/* In tool_debug_cb */
if(per->config->headerfile && heads->stream) {
  size_t cb = size * nmemb;
  size_t rc = fwrite(ptr, size, nmemb, heads->stream); /* <-- vulnerable write */
  if(rc != cb)
    return rc;
  /* flush the stream to send off what we got earlier */
  if(fflush(heads->stream)) {
    errorf(per->config->global, "Failed writing headers to %s",
           per->config->headerfile);
    return CURL_WRITEFUNC_ERROR;
  }
}
```

When `global->tracetype == TRACE_PLAIN`, the following block handles text lines, headers, and data markers.
```
/* In tool_debug_cb, in the TRACE_PLAIN section */
case CURLINFO_HEADER_OUT:
  if(size > 0) {
    size_t st = 0;
    size_t i;
    for(i = 0; i < size - 1; i++) {
      if(data[i] == '\n') { /* LF */
        if(!newl) {
          log_line_start(output, timebuf, idsbuf, type);
        }
        (void)fwrite(data + st, i - st + 1, 1, output); /* <-- vulnerable write */
        st = i + 1;
        newl = FALSE;
      }
    }
    if(!newl)
      log_line_start(output, timebuf, idsbuf, type);
    (void)fwrite(data + st, i - st + 1, 1, output); /* <-- vulnerable write */
  }
  newl = (size && (data[size - 1] != '\n'));
  traced_data = FALSE;
  break;
case CURLINFO_TEXT:
case CURLINFO_HEADER_IN:
  if(!newl)
    log_line_start(output, timebuf, idsbuf, type);
  (void)fwrite(data, size, 1, output); /* <-- vulnerable write */
  newl = (size && (data[size - 1] != '\n'));
  traced_data = FALSE;
  break;
```

```
case CURLINFO_DATA_OUT:
case CURLINFO_DATA_IN:
case CURLINFO_SSL_DATA_IN:
case CURLINFO_SSL_DATA_OUT:
  if(!traced_data) {
    /* ... */
    fprintf(output, "[%zu bytes data]\n", size); /* <-- vulnerable write */
    newl = FALSE;
    traced_data = TRUE;
  }
  break;
```

The `dump()` function is the most significant contributor to the vulnerability when `--trace` or `--trace-ascii` is used, as it meticulously logs every byte of transferred data (headers, actual data, SSL data) in a formatted way.

```
/* In function dump */
/* ... */
fprintf(stream, "%s%s%s, %zu bytes (0x%zx)\n",
        timebuf, idsbuf, text, size, size); /* <-- vulnerable write (header line) */

for(i = 0; i < size; i += width) {
  fprintf(stream, "%04zx: ", i); /* <-- vulnerable write (line prefix) */

  if(tracetype == TRACE_BIN) {
    for(c = 0; c < width; c++)
      if(i + c < size)
        fprintf(stream, "%02x ", ptr[i + c]); /* <-- vulnerable write (hex data) */
      else
        fputs("   ", stream); /* <-- vulnerable write (padding) */
  }

  for(c = 0; (c < width) && (i + c < size); c++) {
    /* ... */
    fprintf(stream, "%c",
            ((ptr[i + c] >= 0x20) && (ptr[i + c] < 0x7F)) ?
            ptr[i + c] : UNPRINTABLE_CHAR); /* <-- vulnerable write (ASCII) */
    /* ... */
  }
  fputc('\n', stream); /* <-- vulnerable write (newline) */
}
fflush(stream);
```

In essence, the vulnerability is not a bug in the `fwrite` or `fprintf` calls themselves, but rather the lack of a protective wrapper or check around these calls to limit the cumulative data written to the trace log file. The fix involves adding that missing check.

# POC

1. Unlimited data server setup (`unli.py`):

```
import http.server
import socketserver
import time

PORT = 8002

class MyHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        print(f"[{time.ctime()}] Received GET request from {self.client_address[0]}")
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        # Stream data continuously
        try:
            i = 0
            while True:
                chunk = f"This is chunk {i}: {'A' * 1024}\n".encode('utf-8')  # ~1 KB of data per chunk
                self.wfile.write(f"{len(chunk):X}\r\n".encode('ascii'))  # chunk size in hexadecimal
                self.wfile.write(chunk)
                self.wfile.write(b"\r\n")
                self.wfile.flush()
                i += 1
                # Optional: add a small delay to observe file growth more easily
                # time.sleep(0.01)
        except Exception as e:
            print(f"[{time.ctime()}] Client disconnected or error: {e}")
        finally:
            # End chunked encoding (may fail if the client is already gone)
            try:
                self.wfile.write(b"0\r\n\r\n")
                self.wfile.flush()
            except OSError:
                pass

with socketserver.TCPServer(("", PORT), MyHandler) as httpd:
    print(f"serving at port {PORT}")
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        print("\nServer shutting down.")
```

2. Run curl with the trace option: open a second terminal and execute the curl command.
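While the transfer runs, the trace log grows several times faster than the download itself, because the default `--trace` output hex-dumps every byte: with the 16-byte row width used by `dump()`, each row is a 6-character offset prefix, 48 characters of hex, up to 16 ASCII characters, and a newline, i.e. roughly 71 output bytes per 16 input bytes, or about 4.4x amplification. A back-of-the-envelope estimator (a standalone sketch; the function name and fixed row layout are assumptions for illustration, not curl code):

```c
#include <stddef.h>

/* Rough upper bound on the trace-log output produced for `size` transferred
   bytes, assuming the 16-bytes-per-row hex-dump layout:
   "0000: " (6) + 16 * "xx " (48) + up to 16 ASCII chars + '\n' (1) = 71. */
size_t estimate_hexdump_output(size_t size)
{
  const size_t row_in = 16;               /* input bytes per dump row */
  const size_t row_out = 6 + 48 + 16 + 1; /* output bytes per full row */
  size_t rows = (size + row_in - 1) / row_in;
  return rows * row_out + 64;             /* + allowance for the header line */
}
```

For the ~37 MB transfer in the command output below, this predicts a trace file on the order of 160 MB.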
```
curl http://localhost:8002/test.txt -o /dev/null --trace output.log
```

```
└─# curl http://localhost:8002/test.txt -o /dev/null --trace output.log
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 37.0M    0 37.0M    0     0  7857k      0 --:--:--  0:00:04 --:--:-- 7856k
```

You will now observe `output.log` rapidly increasing in size as the server continuously streams data and curl logs every byte of it. This demonstrates disk space exhaustion leading to a denial of service (DoS).

# HOW TO FIX

To address the disk space exhaustion demonstrated above, the fix needs to focus on the code sections that write data to the trace log file without any size limitation. In the code shown above, the most relevant places are:

- Before every `fwrite()` or `fprintf()` directed at `output` (or `heads->stream`): this is where data actually reaches the file, so a file-size check belongs here.
- Within the `dump()` function: `dump()` formats and writes the bulk of the binary or ASCII data, so it is a critical location.

The specific areas that would require modification:

a. In `tool_debug_cb()`: at the beginning of the function, before any writes to `output` or `heads->stream` occur, perform the check.

```
int tool_debug_cb(CURL *handle, curl_infotype type,
                  char *data, size_t size, void *userdata)
{
  struct OperationConfig *operation = userdata;
  struct GlobalConfig *global = operation->global;
  FILE *output = tool_stderr;
  /* ... other variable declarations ... */

  /* --- START FIX --- */
  /* Ensure the trace stream is open and a size limit is set */
  if(global->trace_stream && global->trace_fopened &&
     global->max_trace_log_size > 0) {
    /* Estimate the size of the data to be written for this chunk.
       This estimate_formatted_data_size() helper would need to be created;
       it should account for formatting overhead (a hex dump takes more
       space than the raw data). */
    size_t estimated_write_size =
      estimate_formatted_data_size(type, size, global->tracetype);

    /* Check whether this write would exceed the limit */
    if(global->current_trace_log_size + estimated_write_size >
       global->max_trace_log_size) {
      /* Close the file and disable tracing to prevent further writes */
      warnf(global, "Trace log file '%s' reached maximum size (%lu bytes). "
            "Stopping further trace logging.",
            global->trace_dump, (unsigned long)global->max_trace_log_size);
      fclose(global->trace_stream);
      global->trace_stream = NULL;  /* ensure no more writes happen */
      global->trace_fopened = FALSE;
      /* Optionally disable the tracing option completely */
      global->tracetype = TRACE_NONE;
      global->trace_dump = NULL;
      return 0; /* stop the callback from processing further */
    }
  }
  /* --- END FIX --- */

  /* ... rest of tool_debug_cb ... */

  /* Section where headerfile is written: */
  if(per->config->headerfile && heads->stream) {
    size_t cb = size * nmemb;
    size_t rc = fwrite(ptr, size, nmemb, heads->stream);
    /* --- START FIX (conditional) --- */
    /* Update current_trace_log_size only if heads->stream is the designated
       trace stream. If headerfile is a distinct concept from trace_dump,
       it may need its own size-limit mechanism; for simplicity, `output`
       is treated as the main stream to monitor. */
    if(heads->stream == global->trace_stream)
      global->current_trace_log_size += rc;
    /* --- END FIX --- */
    if(rc != cb)
      return rc;
    /* ... */
  }

  /* TRACE_PLAIN section (CURLINFO_HEADER_OUT, CURLINFO_TEXT,
     CURLINFO_HEADER_IN, etc.): after each successful fwrite or fprintf,
     add the number of bytes written, e.g. for CURLINFO_TEXT/HEADER_IN:

       (void)fwrite(data, size, 1, output);
       if(output == global->trace_stream)
         global->current_trace_log_size += size;

     For the "[X bytes data]" marker in the DATA cases, add the
     (approximate) formatted length of that line instead. */

  /* ... call to dump() ... */
  /* dump() updates current_trace_log_size internally, or it could return
     the number of bytes written for tool_debug_cb to add. */
  dump(timebuf, idsbuf, text, output, (unsigned char *)data, size,
       global->tracetype, type, global); /* pass global config to dump */
  return 0;
}
```

b. In the `dump()` function: `dump()` writes most of the data, so it is an efficient place to update `current_trace_log_size`. The `GlobalConfig` must be passed to `dump()` so it can access `current_trace_log_size`.
```
static void dump(const char *timebuf, const char *idsbuf, const char *text,
                 FILE *stream, const unsigned char *ptr, size_t size,
                 trace tracetype, curl_infotype infotype,
                 struct GlobalConfig *global) /* added global parameter */
{
  size_t i;
  size_t c;
  int written_bytes;
  unsigned int width = 0x10;

  if(tracetype == TRACE_ASCII)
    width = 0x40;

  /* --- START FIX --- */
  /* Add the size of the header line to the total */
  written_bytes = fprintf(stream, "%s%s%s, %zu bytes (0x%zx)\n",
                          timebuf, idsbuf, text, size, size);
  if(global && stream == global->trace_stream && written_bytes > 0)
    global->current_trace_log_size += written_bytes;
  /* --- END FIX --- */

  for(i = 0; i < size; i += width) {
    /* --- START FIX --- */
    /* Check the limit before writing each line */
    if(global && stream == global->trace_stream &&
       global->current_trace_log_size >= global->max_trace_log_size)
      break; /* already over the limit: stop writing and flush */
    /* --- END FIX --- */

    /* Write the line prefix (e.g. "0000: ") */
    written_bytes = fprintf(stream, "%04zx: ", i);
    if(global && stream == global->trace_stream && written_bytes > 0)
      global->current_trace_log_size += written_bytes;

    if(tracetype == TRACE_BIN) {
      for(c = 0; c < width; c++) {
        if(i + c < size) {
          written_bytes = fprintf(stream, "%02x ", ptr[i + c]);
          if(global && stream == global->trace_stream && written_bytes > 0)
            global->current_trace_log_size += written_bytes;
        }
        else {
          /* fputs() does not return a byte count, so add the padding
             width (3 bytes) explicitly on success */
          if(fputs("   ", stream) >= 0 &&
             global && stream == global->trace_stream)
            global->current_trace_log_size += 3;
        }
      }
    }

    for(c = 0; (c < width) && (i + c < size); c++) {
      /* ... (CRLF handling logic remains) ... */
      written_bytes = fprintf(stream, "%c",
                              ((ptr[i + c] >= 0x20) && (ptr[i + c] < 0x7F)) ?
                              ptr[i + c] : UNPRINTABLE_CHAR);
      if(global && stream == global->trace_stream && written_bytes > 0)
        global->current_trace_log_size += written_bytes;
      /* ... (CRLF handling logic) ... */
    }

    /* fputc() returns the character written, not a count: add 1 on success */
    if(fputc('\n', stream) != EOF &&
       global && stream == global->trace_stream)
      global->current_trace_log_size += 1;
  }
  fflush(stream);
}
```

# Additional

Disabling tracing: once the limit is hit, it is crucial to disable the tracing mechanism entirely (e.g. by setting `global->trace_stream = NULL` and changing `global->tracetype` to `TRACE_NONE`) to prevent any further write attempts.

## Impact

System Instability and Crash (High Impact)
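The per-call bookkeeping described above can also be concentrated in one small wrapper, so that each write site stays readable and the cap is enforced in a single place. A minimal, self-contained sketch of such a size-capped writer (the `trace_sink` struct and `trace_write` name are illustrative, not curl API):

```c
#include <stdio.h>

/* Illustrative size-capped log sink: refuses writes once the cap is hit. */
struct trace_sink {
  FILE *stream;   /* set to NULL once the cap is reached */
  size_t written; /* cumulative bytes written so far */
  size_t max;     /* maximum log size; 0 = unlimited */
};

/* Write `len` bytes through the sink, enforcing the cumulative cap.
   Returns the number of bytes written (0 once tracing is disabled). */
size_t trace_write(struct trace_sink *sink, const void *buf, size_t len)
{
  size_t rc;
  if(!sink->stream)
    return 0; /* tracing already disabled */
  if(sink->max && sink->written + len > sink->max) {
    /* cap reached: close the stream and disable further tracing */
    fclose(sink->stream);
    sink->stream = NULL;
    return 0;
  }
  rc = fwrite(buf, 1, len, sink->stream);
  sink->written += rc;
  return rc;
}
```

Routing every `fwrite`/`fprintf` in `tool_debug_cb()` and `dump()` through a wrapper like this would enforce the limit uniformly and close the stream once it is hit, matching the "disabling tracing" behavior described above.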

Report Details

Additional information and metadata

State

Closed

Substate

Spam

Submitted

Weakness

LLM04: Model Denial of Service