This repository was archived by the owner on Dec 18, 2018. It is now read-only.

ASP.NET Core MVC,CPUs Become Higher Slowly #2937

Closed
goldeneyezhang opened this issue Sep 17, 2018 · 51 comments

Comments

@goldeneyezhang

We deploy an ASP.NET Core MVC application on Linux, and CPU usage slowly climbs over time.
From Zabbix we can see:
cpu

.NET Core SDK (reflecting any global.json):
Version: 2.1.402
Commit: 3599f217f4

Runtime Environment:
OS Name: centos
OS Version: 7
OS Platform: Linux
RID: centos.7-x64
Base Path: /usr/share/dotnet/sdk/2.1.402/

Host (useful for support):
Version: 2.1.4
Commit: 85255dde3e

.NET Core SDKs installed:
2.1.402 [/usr/share/dotnet/sdk]

.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.4 [/usr/share/dotnet/shared/Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.4 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.4 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

We used perf to collect a trace, but we have no idea what it shows.
sampleTrace.trace.zip
Any ideas on a possible cause?

@Tratcher
Member

@sebastienros

@sebastienros
Member

Any idea?

My first thought is that you have a memory leak, and the GC has to work more, or page to disk.
What does the memory profile look like compared to the CPU usage?

Also, maybe try to get two traces, one when the CPU is low and one when the CPU is much higher. We can then do a diff and see what changed.

@goldeneyezhang
Author

Hi @sebastienros, memory usage is always stable.

@sebastienros
Member

Are you using HttpClient in your code? If so are you reusing the instances or creating/disposing them every time?

If you can, please share the two traces I suggested you take.

@goldeneyezhang
Author

Hi @sebastienros,
CPU high -- sampleTraceHigh.trace.zip
CPU low -- sampleTraceLow.trace.zip
sampleTraceHigh.trace.zip

sampleTraceLow.trace.zip

We use HttpClient:
private static readonly HttpClient HttpClient;
private readonly IHttpClientFactory _httpClientFactory;
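For context, the reuse pattern being asked about, with IHttpClientFactory, looks roughly like this (a sketch with illustrative names, not the app's actual code):

using System.Net.Http;
using System.Threading.Tasks;

public class DownstreamCaller
{
    private readonly IHttpClientFactory _httpClientFactory;

    public DownstreamCaller(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<string> GetAsync(string url)
    {
        // CreateClient hands out clients backed by pooled handlers, so creating one
        // per call is cheap and avoids socket exhaustion from create/dispose churn.
        HttpClient client = _httpClientFactory.CreateClient();
        return await client.GetStringAsync(url);
    }
}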

@muratg muratg changed the title Asp.net Core MVC,CPUs Become Higher Slowly ASP.NET Core MVC,CPUs Become Higher Slowly Sep 19, 2018
@marfenij

@goldeneyezhang what transport do you use, Sockets or Libuv?

@gzpbx

gzpbx commented Sep 23, 2018

@marfenij Sockets

@marfenij

@gzpbx thanks, so we stay with libuv

@goldeneyezhang
Author

I see a lot of blocked threads; using pstack, they show epoll_wait:

pstack 80514
Thread 1 (process 80514):
#0 0x00007f65475a1183 in epoll_wait () from /lib64/libc.so.6
#1 0x00007f653e7278e6 in SystemNative_WaitForSocketEvents () from /usr/share/dotnet/shared/Microsoft.NETCore.App/2.1.4/System.Native.so
#2 0x00007f64d233bfd8 in ?? ()
#3 0x000000007e2b72a0 in ?? ()
#4 0x00007f6546ed4a28 in vtable for InlinedCallFrame () from /usr/share/dotnet/shared/Microsoft.NETCore.App/2.1.4/libcoreclr.so
#5 0x00007f5fef7fdc70 in ?? ()
#6 0x00007f64d28e6910 in ?? ()
#7 0x00007f64d28e6910 in ?? ()
#8 0x00007f5fef7fd810 in ?? ()
#9 0x00007f64d233bfd8 in ?? ()
#10 0x00007f5fef7fd880 in ?? ()
#11 0x00007f64d28e6910 in ?? ()
#12 0x00007f604036ddc8 in ?? ()
#13 0x0000000000006bdc in ?? ()
#14 0x0000000000000000 in ?? ()
@marfenij

@davidfowl
Member

davidfowl commented Oct 1, 2018

Your high CPU trace shows the swapper taking up most of the time spent (if I'm reading this correctly)
image

cc @vancem

@vancem

vancem commented Oct 1, 2018

@davidfowl the swapper is actually just the Idle loop. If you look at the functions where the CPU is being spent in swapper you will see

|  ||||+ kernel.kallsyms!arch_cpu_idle
|  |||| + kernel.kallsyms!__cpuidle_text_start
|  |||| |+ kernel.kallsyms!native_safe_halt

which I assume is where the OS parks CPUs when it has nothing better to do.

If you filter to just what is going on in the dotnet process, you see that 3/4 of the time is spent in System.Private.CoreLib (the main framework DLL), but we don't have symbols for the methods inside.

Including @brianrob, who can help with understanding why that did not work. However, we should make sure you are following the instructions at https://github.com/dotnet/coreclr/blob/master/Documentation/project-docs/linux-performance-tracing.md. In particular you have to set

export COMPlus_PerfMapEnabled=1
export COMPlus_EnableEventLog=1

so that these environment variables get inherited by the dotnet process. If you DIDN'T do this, it will have the symptoms that you describe. The case of the variable names matters (as always on Linux).

Finally, there is a potential work-around using a very new (not even documented yet) feature of .NET Core 2.1 called EventPipe.

You use it by setting the environment variable:

export COMPlus_EnableEventPipe=1

Then start the process and let it run (it is OK if it runs a long time). Then you have to shut the process down cleanly (if it is an ASP.NET app, there is a Ctrl-C handler that does this). You should then have a *.netperf file generated.

@davidfowl
Member

Ah good to know 😄

@gzpbx

gzpbx commented Oct 5, 2018

@vancem now we know why we couldn't get the symbols. Setting export COMPlus_XXXX=1 and then restarting via supervisor doesn't pick up the symbols; we must execute the export and dotnet run in the same shell.
We are on holiday; when we go back to work we will provide the .netperf file with symbols.

For now, we have to restart the program every day... yes, in production...
image

@gzpbx

gzpbx commented Oct 5, 2018

@vancem Are you sure about "export COMPlus_EnableEventPipe=1"? When I execute this export and then start the process, it fails to open its port (I can't access it from the URL).

The other 2 export commands are OK.

image

And Ctrl+C doesn't work either.

@sebastienros
Member

@gzpbx Do you think you could share the application with me? I could try to repro the issue and get the appropriate traces. If possible you can send me a link by email (first.last at microsoft.com)

@gzpbx

gzpbx commented Oct 6, 2018

@vancem I got the *.netperf file.
Process-20485.netperf.zip

Phase 1: All user requests on this VM.
Phase 2: No requests on this VM, but it still has CPU load.
image

@gzpbx

gzpbx commented Oct 6, 2018

@sebastienros, it's not that easy; the application can't run alone, it needs other services (some of them are WCF services). It's an Internet-facing web site.

@davidfowl
Member

@gzpbx You seem to have a logger that's blocking thread pool threads on a Task.Wait. What is this?

 Name
   |+ module System.Private.CoreLib <<System.Private.CoreLib!ThreadPoolWorkQueue.Dispatch>>
   | + module LogApi.Extensions.Logging <<LogApi.Extensions.Logging!LogApi.Extensions.Logging.LogApi.MessageQueue`1[System.__Canon].ProcessItemLoop()>>
   |  + module System.Private.CoreLib <<System.Private.CoreLib!Task.Wait>>
   |  |+ BLOCKED_TIME
   |  + module System.Private.CoreLib <<System.Private.CoreLib!Task.Delay>>
   |   + BLOCKED_TIME

@davidfowl
Member

Are you also making outbound http requests on a timer interval? What does your application do when there are no requests?

@davidfowl
Member

Ok trying to narrow it down:

+ module Wingon.Config.Client <<Wingon.Config.Client!WingonConfig.ReadConfig>>

@gzpbx

gzpbx commented Oct 8, 2018

@davidfowl ,
Are you also making outbound http requests on a timer interval?
-- Yes, it's in WingonConfig.ReadConfig.
We will remove this code in the next release and see what happens.

You seem to have a logger that's blocking thread pool threads on a Task.Wait. What is this?
It sends logs and processes logs: 2 threads, both long-running (some kind of while(true) loop).

@davidfowl
Member

@gzpbx It's not really long-running from the looks of it:

image

This dispatch suggests that you kick off a work item to the thread pool then wait on it synchronously on another thread.

What does this loop look like? BTW I'm not saying this is the problem, I'm just looking for things that are happening off request threads.

@goldeneyezhang
Author

private void ProcessItemLoop()
{
    int sleepCount = 0;
    int sleepTime = 1000;
    while (enabled)
    {
        try
        {
            if (queueWait.IsEmpty)
            {
                // Nothing to do yet; block this thread for a second.
                sleepCount++;
                Task.Delay(sleepTime).Wait();
            }
            else
            {
                int count = queueWait.Count();
                if (sleepCount >= 3 || count >= QueueWaitCount)
                {
                    // We have waited long enough or the batch is big enough:
                    // drain the wait queue into the send queue.
                    sleepCount = 0;
                    sleepTime = 1000;
                    T t;
                    while (queueWait.TryDequeue(out t))
                    {
                        queueSend.Enqueue(t);
                    }
                }
                else
                {
                    sleepCount++;
                    Task.Delay(sleepTime).Wait();
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

Hi @davidfowl, we have a log queue; if there are no logs, we delay.

@davidfowl
Member

Change this:

Task.Delay(sleepTime).Wait();

To:

Thread.Sleep(sleepTime);

Task.Delay is only better if you're going to await it. If you're blocking a thread then I'd recommend sleeping.

Where is ProcessItemLoop kicked off? Is it a dedicated Thread?

@goldeneyezhang
Author

Thanks @davidfowl, we use it like this:

Task.Run(() => ProcessItemLoop());

@davidfowl
Member

For long running work that blocks, you should spawn a thread manually or use Task.Factory.StartNew with TaskCreationOptions.LongRunning.

var thread = new Thread(ProcessItemLoop)
{
    Name = "Queue Processing Loop",
    IsBackground = true
};
thread.Start();

OR

Task.Factory.StartNew(ProcessItemLoop, TaskCreationOptions.LongRunning);
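Putting the two suggestions together, a minimal sketch of the loop on a dedicated background thread with Thread.Sleep (illustrative names, not the app's actual code):

using System;
using System.Collections.Concurrent;
using System.Threading;

public class LogQueueProcessor
{
    private readonly ConcurrentQueue<string> _queueWait = new ConcurrentQueue<string>();
    private readonly ConcurrentQueue<string> _queueSend = new ConcurrentQueue<string>();
    private volatile bool _enabled = true;

    public void Start()
    {
        // A dedicated thread, because the loop blocks; this keeps it off the thread pool.
        var thread = new Thread(ProcessItemLoop)
        {
            Name = "Queue Processing Loop",
            IsBackground = true
        };
        thread.Start();
    }

    private void ProcessItemLoop()
    {
        while (_enabled)
        {
            if (_queueWait.TryDequeue(out var item))
            {
                _queueSend.Enqueue(item);
            }
            else
            {
                // Sleep the dedicated thread directly; no Task or timer is allocated per wait.
                Thread.Sleep(1000);
            }
        }
    }
}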

@gzpbx

gzpbx commented Oct 8, 2018

@davidfowl ,
We fixed our application as per your recommendation and published it online (in production). We'll run it for a few days and see what happens.


The fix (changing the long-running task and removing WingonConfig.ReadConfig) doesn't work. The CPU still climbs slowly, so ...
image

@davidfowl
Member

Can you collect a new profile?

@gzpbx

gzpbx commented Oct 9, 2018

This time I collected 3 profiles (not using EventPipe; EventPipe needs Ctrl-C to generate the *.netperf file, but I need to keep the application running across the 3 collections):

  1. Under user requests (152_09_1420.trace.zip) 400MB+
  2. No user requests (152_09_1451_NoRequest.trace.zip) 100MB+
  3. No user requests, after almost 1 hour (152_09_1544_NoRequest.trace.zip) 26MB

The weird thing is that in the second and third profiles (basically no user requests) there are still many threads with CPU load (sum of samples).
The 2 long-running threads (under the red lines) use very little CPU; I don't think they are the reason the CPU load keeps climbing.
image

@davidfowl
Member

@gzpbx can you get a memory dump when there's no requests but high CPU? Maybe looking at a dump might reveal something. https://github.com/dotnet/coreclr/blob/21f94f2be12e16bb36187d40c2fa82af9ebfd469/Documentation/botr/xplat-minidump-generation.md

@davidfowl
Member

The weird thing is that in the second and third profiles (basically no user requests) there are still many threads with CPU load (sum of samples).

Looking at the 3rd profile (152_09_1544_NoRequest.trace.zip) I still see requests:

image

@gzpbx

gzpbx commented Oct 9, 2018

Looking at the 3rd profile (152_09_1544_NoRequest.trace.zip) I still see requests:

Yeah, we have a monitor (in Zabbix) to check whether the application is alive, 1 request per minute. I will get a dump with no requests at all.

@gzpbx

gzpbx commented Oct 10, 2018

@davidfowl
Absolutely no HTTP requests (only a TCP-level connection for the VIP health check): dump 152_10_1131.trace.zip.

The CPU doesn't go down after the requests stop.

There are 4,429,856 objects. Too many? Our memory dump => coredump.1047.zip
image

@vancem

vancem commented Oct 10, 2018

OK, I have looked at all the data to date. First, we are struggling with the restrictions of our current capabilities (we know they are problematic, and indeed we have fixed at least some of them in V2.2, but that does not help us just this minute). I am including @brianrob to keep me honest.

Basically this investigation would be much easier if we had symbols for the framework. You have to do the rather ugly process explained here: https://github.com/dotnet/coreclr/blob/master/Documentation/project-docs/linux-performance-tracing.md#resolving-framework-symbols
That gets crossgen next to your libcoreclr.so file. Did you do this? Right now it looks like this was not done (which is why we don't get symbols for System.Private.CoreLib). Note that you only need to do this once on the machine that you are running on, and then all collections will work from then on.

Another work-around would be to suppress the use of precompiled images. If you added the variable

  • export COMPlus_ZapDisable=1, that should also fix it (it will affect startup time, but that should not matter for this one-off data collection).

If you collect more traces, please try that.

However I think I have figured out quite a bit with the data we already have.

The complaint is that there is CPU usage when there are no requests, and it gets worse over time.

Indeed the most useful data is the EventPipe data, Process-20485.netperf.zip, which shows that in fact lots of time is being spent in the Timer class firing timers. This was reinforced by data from the last file, 152_10_1131.trace.zip, whose 'events' view shows .NET runtime events that fire every time a timer goes off (currently you need to be an expert to know how to use these events). It shows that we are firing a timer 1200 times a second. This is clearly too much, and fixing that will likely fix the issue.

Also in addition to the ones that fire, there is so much overhead in the timer queue itself that it implies that there are MANY entries in the queue.

One very suspicious possibility is that timers are entered that have PERIODIC times specified. These never die until canceled, and thus could accumulate in the queue over time, leading to EXACTLY the symptom we see. This is my current theory.

Some of the timers are doing a 'ReadConfig', but these only go off once a minute, so they are not interesting. Indeed, none of the callbacks that I see in the trace go off frequently enough to explain things. My current theory is that the callbacks are so small that they do not show up in a CPU sample.

Anyway, we have an interesting line of attack. At this point we should put a breakpoint in the System.Threading.TimerQueue.CallCallback method as well as System.Threading.TimerQueue.UpdateTimer (the latter is called every time a new entry is added to the queue of things that might go off). Let's just see what is being put in the queue. That will probably tell us what we need to know.

Vance

@vancem

vancem commented Oct 10, 2018

Also, the timer queue starts at System.Threading.TimerQueue.Instances. This is an array (one per CPU) of TimerQueues. Each TimerQueue has an m_timers entry that is the linked list of entries. Each entry has a due time (in msec, as recorded by Environment.TickCount) and a callback. Our problem is that something is putting too many entries in the list.

Note that once an entry gets into the queue, if it has a periodic time it will stay in the queue until explicitly removed (and UpdateTimer is not called on each new due-time).
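A minimal illustration of that behavior (not the application's code): a timer created with a period keeps its entry in the queue, and keeps firing, until Dispose is called.

using System;
using System.Collections.Generic;
using System.Threading;

public static class PeriodicTimerAccumulation
{
    public static void Main()
    {
        var timers = new List<Timer>();

        // Each periodic timer stays registered in the runtime's timer queue and keeps
        // firing until it is disposed, so creating timers without disposing them makes
        // the queue (and the work done per tick) grow over time.
        for (int i = 0; i < 1000; i++)
        {
            timers.Add(new Timer(_ => { /* tiny callback */ }, null, dueTime: 1000, period: 1000));
        }

        Console.WriteLine($"Live timers: {timers.Count}");
        Thread.Sleep(5000);

        // Disposing removes the entries from the queue.
        foreach (var t in timers) t.Dispose();
    }
}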

@davidfowl
Member

davidfowl commented Oct 10, 2018

@gzpbx based on the dump and @vancem's analysis, System.Threading.TimerQueue.UpdateTimer is where the time is going. You have 34,890 TimerQueueTimer instances, which looks like the problem.

@gzpbx

gzpbx commented Oct 11, 2018

@vancem @davidfowl
Thank you for your answer. But our code doesn't use TimerQueue or Timer directly. (There was only one Timer, which was doing the 'ReadConfig', but that timer has been removed, so you can't see it in the latest profile file 152_10_1131.trace.zip.)

We can send you our source code if that's OK with you. I am not sure whether the third-party components (DLLs) use that many Timers or TimerQueues, but we don't use many other DLLs.
FlightWebSite.csproj
image
Flight.Service.csproj
image

When I got the coredump.1047.zip file, the application had served 35,374 API calls. 35,374 is close to 34,890; is it possible that each API call (our application calls an internal WCF service) generates a Timer or TimerQueueTimer?

@davidfowl
Member

Do you have any calls to Task.Delay or any CancellationTokens with timeouts? Take a look at the number of CancellationTokenSource or DelayPromise instances (if you have any).

@davidfowl
Member

You could also try to gcroot the timer instances to see what is holding onto them (other than the timer queue itself)

@gzpbx

gzpbx commented Oct 11, 2018

No Task.Delay (it has been removed), but we have CancellationTokens -
image

About "the timer instances to see what is holding onto them" we are working on the coredump.1047.zip, It's real hard to investigate it on linux with only a few sos commands.

@davidfowl
Member

davidfowl commented Oct 11, 2018

Why aren’t you disposing the cancellation token source after the outgoing http request is complete?

@gzpbx

gzpbx commented Oct 11, 2018

OK, we can change the code to use using, but we have always written it that way in traditional ASP.NET MVC projects with no problem. It should be garbage collected.
image

@davidfowl
Member

It does get cleaned up, but only after the timeout. I’m not sure why you didn’t see the issue before. It’s actually worse on .NET Framework (the timer is more optimized on .NET Core). You end up spinning up lots of timers, and because you’re not disposing the CTS, they will fire even if the HTTP request completes.
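A minimal sketch of the disposal pattern being suggested, assuming a CancellationTokenSource is created per outgoing request (illustrative names, not the app's actual code):

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class HttpWithTimeout
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> GetWithTimeoutAsync(string url, TimeSpan timeout)
    {
        // Disposing the CTS when the request completes tears down its internal
        // timeout timer immediately, instead of leaving it queued until the
        // timeout elapses.
        using (var cts = new CancellationTokenSource(timeout))
        {
            HttpResponseMessage response = await Client.GetAsync(url, cts.Token);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}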

@gzpbx

gzpbx commented Oct 11, 2018

Does the WCF client have the same timer issue? We just added client.Close() and .Abort() calls in the new release (which will be released tonight, local time).

image
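For reference, the usual close pattern for a WCF channel is roughly this (a sketch over ICommunicationObject, not the app's actual generated client):

using System;
using System.ServiceModel;

public static class WcfClientExtensions
{
    // Close the channel cleanly; fall back to Abort() so a faulted or timed-out
    // channel (and its internal resources) is not left dangling.
    public static void CloseOrAbort(this ICommunicationObject client)
    {
        try
        {
            client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
        catch
        {
            client.Abort();
            throw;
        }
    }
}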

@gzpbx

gzpbx commented Oct 11, 2018

sos DumpHeap -stat
image
dumpheap -mt 00007fd92a890058
image
dumpobj 00007fd81dcbd728
image

gcroot 00007fd81dcbd728
(lldb) gcroot 00007fd81dcbd728
Thread 417:
    00007FFFA8021DA0 00007FD92A5709E9 System.Threading.Tasks.Task.SpinThenBlockingWait(Int32, System.Threading.CancellationToken) [/root/coreclr/src/mscorlib/src/System/Threading/Tasks/Task.cs @ 2978]
        rbp-38: 00007fffa8021db8
            ->  00007FD59C0C3378 <error>
            ->  00007FD49C02E800 <error>
            ->  00007FD49C0250D0 <error>
            ->  00007FD49C0250E8 <error>
            ->  00007FD49C1A5EA8 <error>
            ->  00007FD81C031018 <error>
            ->  00007FD81C01EBE0 <error>
            ->  00007FD81C0306E8 <error>
            ->  00007FD81C0307E0 <error>
            ->  00007FD81C030820 <error>
            ->  00007FD81C030AA8 <error>
            ->  00007FD81C030998 <error>
            ->  00007FD59C157388 <error>
            ->  00007FD59C157328 System.Threading.CancellationTokenSource
            ->  00007FD59C157708 System.Threading.CancellationTokenSource+CallbackPartition[]
            ->  00007FD59C157760 System.Threading.CancellationTokenSource+CallbackPartition
            ->  00007FD59C157798 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD59C157488 <error>
            ->  00007FD49C24FDC8 <error>
            ->  00007FD49C24FB30 <error>
            ->  00007FD49C24FCA0 <error>
            ->  00007FD49C279DD8 <error>
            ->  00007FD49C2799B8 <error>
            ->  00007FD49C279D48 <error>
            ->  00007FD59C156EC0 <error>
            ->  00007FD59C157110 System.Collections.Generic.List`1[[System.IDisposable, System.Private.CoreLib]]
            ->  00007FD59C157138 System.IDisposable[]
            ->  00007FD59C157238 System.Threading.CancellationTokenRegistration
            ->  00007FD59C1571E8 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD59C1570E8 System.Threading.ExecutionContext
            ->  00007FD59C157028 System.Threading.AsyncLocalValueMap+MultiElementAsyncLocalValueMap
            ->  00007FD59C157040 System.Collections.Generic.KeyValuePair`2[[System.Threading.IAsyncLocal, System.Private.CoreLib],[System.Object, System.Private.CoreLib]][]
            ->  00007FD59C02B0A8 <error>
            ->  00007FD81BFFF9B0 <error>
            ->  00007FD81BFFF988 <error>
            ->  00007FD81BFFF708 <error>
            ->  00007FD81BFFF780 <error>
            ->  00007FD71C042DC0 <error>
            ->  00007FD71C042D60 <error>
            ->  00007FD49C1BD400 <error>
            ->  00007FD49C1BCAA8 <error>
            ->  00007FD49C1BCAC0 <error>
            ->  00007FD49C1B8E00 <error>
            ->  00007FD49C1B8E48 <error>
            ->  00007FD59C0B1740 <error>
            ->  00007FD49C1B9328 <error>
            ->  00007FD49C1DC938 <error>
            ->  00007FD49C1DCA58 <error>
            ->  00007FD49C1DC6D0 <error>
            ->  00007FD49C1DC6A8 <error>
            ->  00007FD81C03F550 <error>
            ->  00007FD81C03F570 <error>
            ->  00007FD81C03F588 System.Threading.CancellationTokenSource
            ->  00007FD49C02ABD8 System.Threading.CancellationTokenSource+CallbackPartition[]
            ->  00007FD49C02AC30 System.Threading.CancellationTokenSource+CallbackPartition
            ->  00007FD59C0B3A20 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD59C0B39E0 System.Action`1[[System.Object, System.Private.CoreLib]]
            ->  00007FD59C0B39B8 <error>
            ->  00007FD59C0B3978 System.Action`1[[System.String, System.Private.CoreLib]]
            ->  00007FD59C0B38F0 <error>
            ->  00007FD59C0B3C70 <error>
            ->  00007FD59C0B3C50 <error>
            ->  00007FD59C0B3C10 <error>
            ->  00007FD59C0B3BF8 <error>
            ->  00007FD59C0B3BB8 <error>
            ->  00007FD59C0B3A70 <error>
            ->  00007FD59C0B1FF0 <error>
            ->  00007FD59C0B0F98 <error>
            ->  00007FD59C0B0E40 <error>
            ->  00007FD59C0B0B30 <error>
            ->  00007FD59C0AFE78 <error>
            ->  00007FD59C0AFE50 <error>
            ->  00007FD59C0AFD38 <error>
            ->  00007FD59C0ACC00 <error>
            ->  00007FD59C0A9070 <error>
            ->  00007FD59C0A88A0 <error>
            ->  00007FD59C0A7BC8 <error>
            ->  00007FD59C0A7B90 <error>
            ->  00007FD59C0A5630 <error>
            ->  00007FD59C0A55F0 <error>
            ->  00007FD59C0A5598 <error>
            ->  00007FD59C0A5578 <error>
            ->  00007FD59C0A5538 <error>
            ->  00007FD59C0A4930 <error>
            ->  00007FD59C0A39C0 <error>
            ->  00007FD59C0A4860 <error>
            ->  00007FD59C03ECE8 <error>
            ->  00007FD59C03ED70 <error>
            ->  00007FD59C039C08 <error>
            ->  00007FD59C039CD8 <error>
            ->  00007FD59C039870 <error>
            ->  00007FD59C0398E0 <error>
            ->  00007FD59C0399C8 <error>
            ->  00007FD59C039B18 <error>
            ->  00007FD59C038660 <error>
            ->  00007FD59C032408 <error>
            ->  00007FD59C031BA0 <error>
            ->  00007FD59C031AA8 <error>
            ->  00007FD59C02C590 <error>
            ->  00007FD81C02FEC8 <error>
            ->  00007FD81C030048 <error>
            ->  00007FD81C030088 <error>
            ->  00007FD49C278A10 <error>
            ->  00007FD49C277C18 <error>
            ->  00007FD49C278850 <error>
            ->  00007FD59C187038 System.Threading.CancellationTokenSource
            ->  00007FD59C1873E8 System.Threading.CancellationTokenSource+CallbackPartition[]
            ->  00007FD59C187440 System.Threading.CancellationTokenSource+CallbackPartition
            ->  00007FD59C18AFB0 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD59C187478 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD59C187150 <error>
            ->  00007FD59C04E408 <error>
            ->  00007FD59C04DE08 <error>
            ->  00007FD59C04DE58 <error>
            ->  00007FD59C189768 <error>
            ->  00007FD59C188DF0 <error>
            ->  00007FD59C1895E8 <error>
            ->  00007FD59C04F528 <error>
            ->  00007FD59C04FFA0 System.Collections.Generic.List`1[[System.IDisposable, System.Private.CoreLib]]
            ->  00007FD59C04FFC8 System.IDisposable[]
            ->  00007FD59C050380 System.Threading.CancellationTokenRegistration
            ->  00007FD59C050330 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD59C050080 System.Threading.CancellationTokenSource+CallbackPartition
            ->  00007FD59C05D5D0 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD59C05D0A0 <error>
            ->  00007FD59C038E08 <error>
            ->  00007FD59C038C90 <error>
            ->  00007FD59C038CE0 <error>
            ->  00007FD49C25BAC8 <error>
            ->  00007FD49C25B6A8 <error>
            ->  00007FD79C527C18 <error>
            ->  00007FD79C456648 <error>
            ->  00007FD79C527C48 System.Collections.Generic.List`1[[System.IDisposable, System.Private.CoreLib]]
            ->  00007FD79C527FB8 System.IDisposable[]
            ->  00007FD79C527CF8 System.Threading.CancellationTokenRegistration
            ->  00007FD79C527CA8 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD79C527C70 System.Threading.CancellationTokenSource+CallbackPartition
            ->  00007FD49C277920 System.Threading.CancellationTokenSource
            ->  00007FD49C278DB8 System.Threading.CancellationTokenSource+CallbackPartition[]
            ->  00007FD49C278E10 System.Threading.CancellationTokenSource+CallbackPartition
            ->  00007FD49C278E48 System.Threading.CancellationTokenSource+CallbackNode
            ->  00007FD49C278D18 System.Threading.ExecutionContext
            ->  00007FD49C278C58 System.Threading.AsyncLocalValueMap+MultiElementAsyncLocalValueMap
            ->  00007FD49C278C70 System.Collections.Generic.KeyValuePair`2[[System.Threading.IAsyncLocal, System.Private.CoreLib],[System.Object, System.Private.CoreLib]][]
            ->  00007FD49C25A640 <error>
            ->  00007FD49C259CC8 <error>
            ->  00007FD49C28A740 <error>
            ->  00007FD49C28A4D8 <error>
            ->  00007FD49C28A550 <error>
            ->  00007FD81C03C3C0 <error>
            ->  00007FD81C03C360 <error>
            ->  00007FD81C03B360 <error>
            ->  00007FD81C03BC38 <error>
            ->  00007FD81C03BC00 <error>
            ->  00007FD81C03B9E8 <error>
            ->  00007FD81C03CEC0 System.Action
            ->  00007FD81C03CE00 <error>
            ->  00007FD81C03BF40 <error>
            ->  00007FD81C03CF00 <error>
            ->  00007FD59C0B5DC8 <error>
            ->  00007FD59C0B3D50 <error>
            ->  00007FD59C0B3D30 <error>
            ->  00007FD49C1AAD70 <error>
            ->  00007FD49C1AFB38 <error>
            ->  00007FD59C0FFC28 <error>
            ->  00007FD59C0FDB70 <error>
            ->  00007FD59C0FF988 <error>
            ->  00007FD51C01E2A0 <error>
            ->  00007FD51C01E130 <error>
            ->  00007FD51C01E150 System.Object[]
            ->  00007FD59C065980 <error>
            ->  00007FD59C0659F0 <error>
            ->  00007FD59C05C2A0 <error>
            ->  00007FD59C05C028 <error>
            ->  00007FD59C05AD78 <error>
            ->  00007FD59C045768 <error>
            ->  00007FD49C1AAE00 <error>
            ->  00007FD49C1AAE38 System.Collections.Generic.Dictionary`2[[System.Object, System.Private.CoreLib],[System.Object, System.Private.CoreLib]]
            ->  00007FD59C05DC98 System.Collections.Generic.Dictionary`2+Entry[[System.Object, System.Private.CoreLib],[System.Object, System.Private.CoreLib]][]
            ->  00007FD49C24E978 <error>
            ->  00007FD49C24F2E0 <error>
            ->  00007FD49C24F320 <error>
            ->  00007FD49C24F4C8 <error>
            ->  00007FD81D7798A8 <error>
            ->  00007FD69E570548 <error>
            ->  00007FD69E570570 <error>
            ->  00007FD69E570368 <error>
            ->  00007FD69E570398 <error>
            ->  00007FD69E571398 <error>
            ->  00007FD69E571380 <error>
            ->  00007FD69E570FE0 <error>
            ->  00007FD69E5712F0 System.Threading.Timer
            ->  00007FD69E571368 System.Threading.TimerHolder
            ->  00007FD69E571308 System.Threading.TimerQueueTimer
            ->  00007FD61C9DDA28 System.Threading.TimerQueueTimer
            ->  00007FD51D575328 System.Threading.TimerQueueTimer
            ->  00007FD59DBCC468 System.Threading.TimerQueueTimer
            ->  00007FD81D5D41B8 System.Threading.TimerQueueTimer
            ->  00007FD79C8717B0 System.Threading.TimerQueueTimer
            ->  00007FD61C9A8B88 System.Threading.TimerQueueTimer
            ->  00007FD61C985148 System.Threading.TimerQueueTimer
            ->  00007FD79E254958 System.Threading.TimerQueueTimer
            ->  00007FD79E254048 System.Threading.TimerQueueTimer
            ->  00007FD71CADED48 System.Threading.TimerQueueTimer
            ->  00007FD71CACFBE8 System.Threading.TimerQueueTimer
            ->  00007FD59E459F58 System.Threading.TimerQueueTimer
            ->  00007FD81DCF2F00 System.Threading.TimerQueueTimer
            ->  00007FD69D900258 System.Threading.TimerQueueTimer
            ->  00007FD81DCBE068 System.Threading.TimerQueueTimer
            ->  00007FD81DCBD740 System.Threading.TimerQueueTimer
            ->  00007FD81DCBD418 <error>
            ->  00007FD81DCBD728 System.Threading.Timer

Found 1 unique roots (run 'gcroot -all' to see all roots).

@gzpbx

gzpbx commented Oct 11, 2018

It looks like the CPU usage is stable after the new publish today; this release doesn't include the CancellationTokenSource dispose commit, but it does include the WCF client close & abort commit.

We need more time to confirm the fix is working.


Sure, it's working.
image

@davidfowl
Member

You should dispose the CancellationTokenSource anyways. I'm not sure about the WCF client. Is that the generated client?

@sebastienros
Member

Shouldn't WCF clients be handled the same way HttpClient is, by reusing instances?

@gzpbx

gzpbx commented Oct 12, 2018

You should dispose the CancellationTokenSource anyways. I'm not sure about the WCF client. Is that the generated client?

Yes, basically the generated client, but we made some updates (e.g. we use our own entity classes copied from the WCF server).

@davidfowl
Member

@gzpbx did you resolve this?

@gzpbx

gzpbx commented Oct 19, 2018

Yes, you can close this issue. Thank you for your help.
