# Performance issue when sending large responses for concurrent requests #37680
FWIW 0df581c introduced this change in behavior.
Thanks @mscdex for replying! So is this behavior intended? If so, is there a recommendation on how to achieve the expected behavior?
There might be some unneeded writes for empty chunks introduced by 0df581c as per #27709 (comment), but the issue here is different. Before 0df581c the [...] After 0df581c the [...]
Just to clarify, is it true that this issue hasn't been reported before? It happened on our production servers, and we then reproduced it on our dev Mac laptops. If it helps the investigation, we more or less solved this issue by doing both of the following:
With these two workarounds, we were able to achieve the "expected behavior" pasted above. It seems to us that the event loop getting blocked is the problem here, and that it has something to do with large response flushing.
Are you using keep-alive connections? If so, it makes sense: the client socket is the same, so the second response has to wait for the data of the first response to be flushed. Some blocking is expected because you are synchronously serializing and sending 55 MiB of data for every request.
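The cost of that synchronous serialization is easy to see in isolation: while `JSON.stringify` runs, nothing else on the event loop makes progress. A minimal sketch (the payload size here is illustrative, smaller than the 55 MiB in the report):

```javascript
// Sketch: measuring how long a synchronous JSON.stringify of a large
// payload keeps the event loop from making progress. The timer below is
// scheduled for 10 ms but cannot fire until serialization finishes, so
// its drift approximates the blocked time.
function makeBigObject(leaves, depth) {
  if (depth === 0) return 'howdy';
  const ret = {};
  for (let i = 0; i < leaves; ++i) {
    ret[i] = makeBigObject(leaves, depth - 1);
  }
  return ret;
}

// Illustrative payload, a few MB rather than the report's 55 MiB.
const bigObject = makeBigObject(500, 2);

const scheduled = Date.now();
setTimeout(() => {
  const driftMs = Date.now() - scheduled - 10;
  console.log(`timer fired ~${driftMs}ms late`);
}, 10);

// Synchronous work: nothing else runs until this returns.
const json = JSON.stringify(bigObject);
console.log(`serialized ${json.length} bytes`);
```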
They are different file descriptors according to the Node.js diagnostic report (https://nodejs.org/api/report.html), so I don't think the second response needs to wait for the first response to finish flushing.
It is more or less consistent with plain `net` sockets:

```js
const net = require('net');

function makeBigObject(leaves, depth) {
  if (depth === 0) {
    return 'howdy';
  } else {
    const ret = {};
    for (let i = 0; i < leaves; ++i) {
      ret[i] = makeBigObject(leaves, depth - 1);
    }
    return ret;
  }
}

const bigObject = makeBigObject(2000, 2);
let connectionCount = 0;
let firstConnectionStartTime;

function getTimeMs() {
  return Date.now() - firstConnectionStartTime;
}

function handleConnection({ connectionIndex, socket }) {
  socket.resume();
  if (connectionIndex === 1) {
    firstConnectionStartTime = Date.now();
  }
  console.log(
    `[${getTimeMs()}] Serializing data for connection ${connectionIndex}...`
  );
  const serializedBigObject = JSON.stringify(bigObject);
  const flushStartTimeMs = Date.now();
  socket.on('finish', function () {
    const flushDurationMs = Date.now() - flushStartTimeMs;
    console.log(
      `[${getTimeMs()}] -- Took ${flushDurationMs}ms to flush data for connection ${connectionIndex} --`
    );
  });
  console.log(
    `[${getTimeMs()}] Sending ${Math.round(
      serializedBigObject.length / 1024 / 1024
    )}MB data for connection ${connectionIndex}...`
  );
  socket.end(serializedBigObject);
  console.log(
    `[${getTimeMs()}] - Handler done for connection ${connectionIndex} -`
  );
}

const server = net.createServer(function (socket) {
  handleConnection({ connectionIndex: ++connectionCount, socket });
});

server.listen(function () {
  const port = server.address().port;
  for (let i = 0; i < 5; i++) {
    net.createConnection({ port }).resume();
  }
});
```

so, if anything, it is not an HTTP issue.
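One way to change the flushing pattern in the snippet above is to avoid a single `socket.end(bigString)` and instead write in smaller chunks while respecting backpressure, so the event loop can interleave flushing for other connections. A sketch (`writeInChunks` and the 64 KiB chunk size are illustrative choices, not an API from the issue):

```javascript
// Sketch: writing a large payload in chunks while respecting
// backpressure. write() returning false means the internal buffer is
// over the high-water mark; continuing only after 'drain' lets other
// connections make progress in between.
const CHUNK_SIZE = 64 * 1024; // arbitrary illustrative chunk size

function writeInChunks(socket, data, offset = 0) {
  while (offset < data.length) {
    const chunk = data.slice(offset, offset + CHUNK_SIZE);
    offset += CHUNK_SIZE;
    if (!socket.write(chunk)) {
      // Buffer full: resume once the destination has drained.
      socket.once('drain', () => writeInChunks(socket, data, offset));
      return;
    }
  }
  socket.end();
}
```

This works with any `Writable`, so it can be exercised against a `PassThrough` stream without opening sockets.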
Platform: 18.04.1-Ubuntu SMP Sat Feb 6 01:56:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux, or Darwin xxx 19.6.0 Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.161/RELEASE_X86_64 x86_64

### What steps will reproduce the bug?
### How often does it reproduce? Is there a required condition?
Send multiple concurrent requests to the app. For example:

```sh
(for x in $(seq 5); do time curl --unix-socket /tmp/sock http://localhost/ > /dev/null & done)
```
### What is the expected behavior?

I expect responses to be flushed fairly quickly, and that early requests don't "wait" on later requests so that everything flushes together at the end.
```
[0] Processing request 1...
[23] Serializing response for request 1...
[390] Sending 55MB response for request 1...
[536] - Handler done for request 1 -
[537] Processing request 2...
[547] -- Took 158ms to flush response for request 1 --
[558] Serializing response for request 2...
[925] Sending 55MB response for request 2...
[1065] - Handler done for request 2 -
[1065] Processing request 3...
[1077] -- Took 152ms to flush response for request 2 --
[1086] Serializing response for request 3...
[1444] Sending 55MB response for request 3...
[1568] - Handler done for request 3 -
[1568] Processing request 4...
[1579] -- Took 135ms to flush response for request 3 --
[1589] Serializing response for request 4...
[1948] Sending 55MB response for request 4...
[2075] - Handler done for request 4 -
[2075] Processing request 5...
[2087] -- Took 139ms to flush response for request 4 --
[2096] Serializing response for request 5...
[2455] Sending 55MB response for request 5...
[2581] - Handler done for request 5 -
[2591] -- Took 136ms to flush response for request 5 --
```
### What do you see instead?

Earlier requests were blocked and made no progress until the later requests finished processing/serializing. The more concurrent requests there are, the longer the delay and the worse the problem gets.
```
[0] Serializing response for request 1...
[334] Sending 55MB response for request 1...
[491] - Handler done for request 1 -
[492] Serializing response for request 2...
[820] Sending 55MB response for request 2...
[957] - Handler done for request 2 -
[958] Serializing response for request 3...
[1292] Sending 55MB response for request 3...
[1430] - Handler done for request 3 -
[1432] Serializing response for request 4...
[1763] Sending 55MB response for request 4...
[1902] - Handler done for request 4 -
[1910] Serializing response for request 5...
[2229] Sending 55MB response for request 5...
[2371] - Handler done for request 5 -
[2536] -- Took 773ms to flush response for request 4 --
[2538] -- Took 309ms to flush response for request 5 --
[2539] -- Took 1719ms to flush response for request 2 --
[2540] -- Took 2206ms to flush response for request 1 --
[2540] -- Took 1248ms to flush response for request 3 --
```
### Additional information

If `res.send(serializedBigObject)` is replaced with `res.send("ok")`, you get the expected behavior, so response size matters. (The log below still prints 55MB because that message is computed from the serialized object, even though only "ok" is sent.)
```
[0] Serializing response for request 1...
[341] Sending 55MB response for request 1...
[345] - Handler done for request 1 -
[346] -- Took 5ms to flush response for request 1 --
[348] Serializing response for request 2...
[660] Sending 55MB response for request 2...
[661] - Handler done for request 2 -
[661] -- Took 1ms to flush response for request 2 --
[661] Serializing response for request 3...
[976] Sending 55MB response for request 3...
[977] - Handler done for request 3 -
[977] -- Took 1ms to flush response for request 3 --
[977] Serializing response for request 4...
[1296] Sending 55MB response for request 4...
[1297] - Handler done for request 4 -
[1297] -- Took 1ms to flush response for request 4 --
[1297] Serializing response for request 5...
[1633] Sending 55MB response for request 5...
[1633] - Handler done for request 5 -
[1634] -- Took 1ms to flush response for request 5 --
```