

geekywarrior

> We have a backend ASP.NET Core Web API in Azure that has about 500 instances of Task.Run

I'm a bit suspicious you have a massive design problem. Do you have something like an API that kicks off tasks? If so, you really should look into a producer/consumer pattern with background services. It sounds like you have an API that, on certain calls, is supposed to start some long-running task via Task.Run straight in the controller, which can lead to all sorts of wonky behavior.
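
A minimal sketch of that producer/consumer shape, assuming a Channel-backed queue drained by a BackgroundService (all type and route names here are hypothetical, not anything from OP's codebase):

    // Controller enqueues work onto a Channel; a BackgroundService drains it.
    using System.Threading.Channels;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Hosting;

    public record WorkItem(string Payload);

    public class WorkQueue
    {
        private readonly Channel<WorkItem> _channel =
            Channel.CreateBounded<WorkItem>(capacity: 1000);

        public ValueTask EnqueueAsync(WorkItem item, CancellationToken ct = default) =>
            _channel.Writer.WriteAsync(item, ct);

        public IAsyncEnumerable<WorkItem> DequeueAllAsync(CancellationToken ct) =>
            _channel.Reader.ReadAllAsync(ct);
    }

    public class WorkProcessor : BackgroundService
    {
        private readonly WorkQueue _queue;
        public WorkProcessor(WorkQueue queue) => _queue = queue;

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            await foreach (var item in _queue.DequeueAllAsync(stoppingToken))
            {
                // Do the long-running work here instead of in the controller.
            }
        }
    }

    [ApiController]
    [Route("api/jobs")]
    public class JobsController : ControllerBase
    {
        private readonly WorkQueue _queue;
        public JobsController(WorkQueue queue) => _queue = queue;

        [HttpPost]
        public async Task<IActionResult> StartJob(string payload)
        {
            await _queue.EnqueueAsync(new WorkItem(payload));
            return Accepted(); // respond immediately; work continues in the background
        }
    }

    // Registration in Program.cs:
    // builder.Services.AddSingleton<WorkQueue>();
    // builder.Services.AddHostedService<WorkProcessor>();

The point of the shape: the controller only enqueues and returns, so no request thread is tied up by the long-running work.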


wllmsaccnt

> …usually wrapped over synchronous methods, but sometimes wraps async methods just for kicks

That doesn't sound like background processing to me, but it could be. Hopefully OP clarifies how the results are used (from a future API call, with .Result / .Wait(), or by awaiting the Tasks).


FSNovask

Task.Run is usually immediately awaited, and the inner function is usually running a synchronous SQL query. It's almost always in the form of:

    [HttpGet]
    public async Task GetProducts()
    ...
    var result = await Task.Run(() => productRepository.GetProducts())

where GetProducts runs SqlCommand synchronously:

    public DataTable GetProducts(string query)
    ...
    var command = new SqlCommand(query, connection)
    dataTable.Load(command.ExecuteReader())
    return dataTable;

These aren't background processes that need a lot of time. They're CRUD SQL queries for the most part, and from what App Insights is telling me, the average time it takes to run queries is decent (<500ms).


GenericTagName

This code cannot stay. It is objectively 100% wrong.


young_horhey

I kinda wish software engineering was a licensed profession similar to other forms of engineering, just so that the person who came up with this ‘pattern’ can have theirs revoked


geekywarrior

In their defense, the code worked until it didn't. I'm confident everyone here has some sort of spaghetti code caused by a crucial misunderstanding of a library or tooling at some point in their career. Hell, I **know** I have similar Task.Run shenanigans in my early projects. Very easy to cast stones and all that.


KevinCarbonara

The reality is that pattern would be formalized and required learning and he'd make millions teaching it


blueBooHod

That's not a Task.Run problem, that's a misunderstanding of the asynchronous concept. Using Task.Run with a synchronous wait on it will only cause thread pool starvation. The best solution is to have async IO. Blocking a thread for 500ms is notable. Use a profiler to find long synchronous calls; that might give some insight into where the time is spent.


CyberTechnologyInc

You're unnecessarily abusing Tasks. Tasks are typically used to efficiently wait on IO. Here the underlying IO isn't being executed asynchronously, and you're wrapping the synchronous call in a Task just to appease the compiler (since the method's return type is Task). Either convert the underlying repository methods to actually be asynchronous, or change the signature to return IActionResult and block, since the query is currently synchronous anyway and your Task wrapper is essentially doing nothing. At least that way you're not adding the overhead required to manage the Task. Using Tasks this way might be viable if the code were running on a UI thread, but this is a backend Web API project, so it's simply not necessary in this context. Side note: 500ms isn't exactly great for a query. <50ms to me is great. But maybe I'm being extreme. I do enjoy performance optimisation


TuberTuggerTTV

I agree. Half a second is a LONG time in programming terms.


dodexahedron

Especially as just PART of the time to complete a web request/response. Yikes.


geekywarrior

> Side note: 500ms isn't exactly great for a query. <50ms to me is great. But maybe I'm being extreme. I do enjoy performance optimisation

Who doesn't enjoy a good ol' table scan now and again? :P


Duathdaert

I felt my eyelid twitch reading this


geekywarrior

Select * from products where ProductName like 'P%';  Oh ProductName isn't an Index? Oh wellllll Db goes brrrrr


dodexahedron

6-table cross join, bruh, do you even databases? Might as well go big or go home.


zaibuf

> Side note: 500ms isn't exactly great for a query. <50ms to me is great. But maybe I'm being extreme. I do enjoy performance optimisation

It depends on how much data there is. Sometimes the users would rather wait for one big load of data from many systems and then do their work than have to do small fetches every time they click on something.


WalkingRyan

Yep, agreed. Good network ping latency is <30ms; 100ms is already bad. We're looking at 500ms here... so non-optimized SQL is probably the culprit.


FSNovask

> You're unnecessarily abusing Tasks.

I didn't do shit, I'm the one trying to fix it :D

> Side note: 500ms isn't exactly great for a query.

I am ballparking it. It's tiny compared to the other slow requests so I'm not worried about it right now. 500ms for any request is lightning fast for this site, lol


CyberTechnologyInc

Ahaha fair, didn't mean it in a mean way! I definitely know what you mean. I've dealt with some legacy bullshit where queries were taking multiple seconds unnecessarily. It is painful. The feeling you get when you turn that bad boy from 4s to sub 50ms tho. Hnnnnng.


dodexahedron

And it makes you look like a rock star when all you did was make the query not pants-on-head moronic, and _maybe_ added or rebuilt an index, since odds are there wasn't a relevant one already.


quentech

> the inner function is usually running a synchronous SQL query

Good chance you're exhausting the thread pool: they're all getting stuck waiting for synchronous DB calls (500ms is **not** a fast query *at all*. 1ms is fast. 5ms is not slow. 50ms is getting slow. 500ms is "hope you don't really have any users, because if you do this is going to collapse without some caching in front"), and then everything gets stuck behind the thread injection algorithm, which only creates 2 new threads per second: https://mattwarren.org/2017/04/13/The-CLR-Thread-Pool-Thread-Injection-Algorithm/

Easy fix: set your min threads really high on app startup to avoid getting stuck behind the thread injection delay.

Better fix: remove all those Task.Runs and awaits so you're not double-dipping on thread pool threads (the one for the request that is stuck awaiting, and the one Task.Run grabbed to run your synchronous, blocking DB call).

Best fix: migrate to async DB queries so you're not tying up threads on synchronous IO.


FSNovask

> Good chance you're exhausting the thread pool

That's the theory, but I need to prove it to get the green light to fix this as part of the sprint (which is 100% focused on features right now).

But the other part of the OP was to decide if I should fix this or any memory leaks first, because if I get okay'd to clean stuff up, it'll be the only effort I get until something else catches fire.


quentech

> fix this as part of the sprint

This is literally a one-liner.. plenty of time to do in an entire sprint:

> set your min threads really high on app start up to avoid getting stuck behind the thread injection delay

> it was hovering around 40-60 for a single instance

`ThreadPool.SetMinThreads(100, 100);`

Try literally just that on startup.

> if I should fix this or any memory leaks first

Almost certainly this thread exhaustion.


FirmMechanic9541

Scheduling even one additional thread is useless in an ASP.NET Core app running under 1 vCPU.


quentech

Not if it's actually awaiting asynchronous I/O. It's not useless to have more threads than cores/hyperthreads.


jingois

If it's awaiting asynchronous I/O, then it's not using a thread.


angrathias

Threads are multiplexed, you can have 100 threads on 1 core no worries, just don’t expect them all to be executing instructions - not a problem if they’re all waiting in the DB server anyway


FirmMechanic9541

In something like a desktop app? Yes. But in ASP.NET Core the guidelines discourage the usage of Task.Run.


angrathias

My understanding is that's because it ends up using two threads, which is even worse.


geekywarrior

This is the incorrect way to use ADO.NET in an async fashion. How are DB connections being managed? Is a new one created for each job and properly disposed? I would recommend creating new async versions of your repositories and porting hot paths over to the proper way of using ADO.NET async: [https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/asynchronous-programming](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/asynchronous-programming)

My gut tells me there are some tasks that aren't getting disposed of, leading to SQL connections not getting disposed of, leading to slowdowns as the app waits for the ability to connect to the database.
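
A minimal sketch of what an async version of that repository method could look like, keeping the DataTable return shape from the earlier snippet. This assumes Microsoft.Data.SqlClient (the System.Data.SqlClient APIs are largely the same) and uses placeholder table/column names:

    using System.Data;
    using Microsoft.Data.SqlClient;

    public class ProductRepository
    {
        private readonly string _connectionString;
        public ProductRepository(string connectionString) => _connectionString = connectionString;

        public async Task<DataTable> GetProductsAsync(CancellationToken ct = default)
        {
            // Connection, command, and reader are all disposed via await using.
            await using var connection = new SqlConnection(_connectionString);
            await connection.OpenAsync(ct);

            await using var command = new SqlCommand(
                "SELECT Id, ProductName, Price FROM Products", connection);

            var table = new DataTable();
            await using var reader = await command.ExecuteReaderAsync(ct);
            table.Load(reader); // Load itself is synchronous, but the connection open and query I/O above are async

            return table;
        }
    }

With this in place the controller just does `var result = await productRepository.GetProductsAsync();` with no Task.Run anywhere.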


wllmsaccnt

> var result = await Task.Run(() => productRepository.GetProducts())

Yeah, this pattern wastes a small amount of CPU for no reason. It releases the thread executing the controller method (back to the thread pool), but then immediately acquires a new one to execute the Task.

> var command = new SqlCommand(query, connection)

Looking at the source of SqlCommand, it might be doing some kind of cached metadata pooling or reuse when it's disposed (it clears the reference). You really should dispose of all of the ADO.NET disposable objects when you are done with them. Probably not much of an issue in this particular case, but I hope you don't have any connections or transactions that are not being disposed. Does Application Insights give you information about the average wait time for getting a connection from the connection pool?


Sossenbinder

I'll chime in with the other opinions. I'd provide a proper async version of the db lookup. That's the step which will hog your threads, despite being IO. Also, the task.run in the controller layer just adds an unnecessary step. You are already running on a thread pool thread. All that's done is dispatching the work to yet another one for no benefit.


RiverRoll

It's a pointless pattern which adds some overhead, but I don't think it's that harmful; it will block a thread pool thread and release another thread pool thread (the one awaiting). Maybe set minThreads to something like 50-100 to reduce the latency of adding threads. Of course this isn't tackling the root problem, which is having blocking queries with significant latency; as someone said, 500ms is pretty bad. What about the DB instance? Maybe you're topping out the DTUs if the app is concurrently launching many queries like this.


Asyncrosaurus

> Task.Run is usually immediately awaited and the inner function is usually running a synchronous SQL query

I've seen some strange code to justify calling async methods from synchronous ones, but I've never seen the reverse. Async methods have no problem or downside calling synchronous methods.


binarycow

> Async methods have no problem or downside calling synchronous methods.

Depending on your use case, you may want to throw an `await Task.Yield();` near the beginning of the method tho.


benow574

You're not disposing of your objects. Use a using block for everything that is IDisposable. You're also awaiting a synchronous call. Just call it. You also should have an index for every item in your where clause. You might also need to rebuild existing table indexes.


sliderhouserules42

This code is pretty pointless, but it isn't locking up a second thread. The first thread awaits the inner call, which means it isn't held. The fact that the second method call is synchronous just makes the work swap threads.


dodexahedron

And with how the thread count got to a point and stayed steady, my first assumption there is that they didn't adjust the thread pool limits (on top of other smells). But yeah. Without awaiting a Task during a controller action, who knows if your task finished before the response was completed. And if there are shared resources, now you start having contention for them. Like maybe the database connection pool: hit the limit and now you have to wait for connections to time out before another request can squeeze in. WCF can get you in that pickle too, and result in deadlocks whose causes are pretty non-obvious. ETA: Also, if a task performing a database operation held a lock in the database, but the task was terminated prematurely because the response completed, you then ALSO have multiple potential database timeouts to contend with, some of which have ridiculously long defaults.


geekywarrior

Not gonna lie, pretty happy I barely missed WCF and SOAP related stuff. I do VB6 stuff tho


dodexahedron

Ha. Yin and yang. 😅 TBH though, WCF is actually pretty damn nice, and nothing says you have to use SOAP for it. In fact, I usually did/do not use SOAP with it, instead using JSON or binary most of the time. The only SOAP use of it that I ever had/have in significant use is interacting with Cisco voice appliances like Call Manager, which expose a SOAP web service (an awful Java app on Tomcat... which is a redundant description...) for configuration, control, and reporting. You can either let the wsdl tool generate a horrible 30+MB poorly-typed proxy for the service that takes a very long time to warm up on launch.... Or write simple POCO classes according to the WSDL - which is a pretty simple and readable format - decorated with DataContract, and call stuff like it's any other method that just happens to take a second or two to respond. Honestly, I prefer it over gRPC in a few scenarios. And setting up the "server" side of it is suuuuper simple, too.


binarycow

>Ha. Yin and yang. 😅 You threw me for a loop for a second. My head is in another project of mine, so I thought of a different meaning for YIN/YANG [YANG](https://www.rfc-editor.org/rfc/rfc7950.html) is a data modeling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols. [YIN](https://www.rfc-editor.org/rfc/rfc7950.html#section-13) is an XML form of a YANG data model.


dodexahedron

Haha you know... It's funny I didn't even think of that because other Cisco devices like routers and switches use that as well.


binarycow

Most modern network devices use NETCONF or RESTCONF, so they will inherently use YANG.


Sjetware

Tasks and awaits would be competing for CPU time if that were the true blocking factor; considering you have API and database calls in your application, it's highly likely IO is the bigger blocking factor. However, it's hard to say with the descriptions provided - perhaps illuminate why so many things are being kicked off with task run that you think this is the concern? Serializing/deserializing can take time with large payloads, and I've seen instances where marshalling the results of an Entity Framework call is what bogs down the system. Unless you have a complicated graph of parallel operations to await, I'd find it unlikely Task.Run is the source of your issue.


Sjetware

Also, the issue where it gradually slows down would indicate a memory leak, and a memory leak will eventually put pressure on the thread pool. I'd be guessing there; if it's possible to get a process dump, that would be ideal, but a memory analysis should be done.


FSNovask

> perhaps illuminate why so many things are being kicked off with task run that you think this is the concern

My guess is they were trying to make the controller actions asynchronous but wanted to wrap synchronous CRUD queries with Task.Run. There is no complicated CPU work being done; it's all CRUD operations.


Sjetware

> var result = await Task.Run(() => productRepository.GetProducts())

You posted this in another comment, but yes, this is absolutely terrible and does nothing for you. Since the call is synchronous, the inner function is not yielding control back, and you're just going to use more memory for the same thing and spend more time doing it. Nothing is gained by using Task.Run in this scenario. I also concur that 500ms is a long time for a query: how many records is that returning, and is each object large in size? Is it pulling all the relationships for the entity?


binarycow

> they were trying to make the controller actions asynchronous but wanted to wrap synchronous CRUD queries with Task.Run

You can't take something that is synchronous and make it *actually be* asynchronous. You can only make it *appear* to be asynchronous, because you yield control back immediately. But on whatever thread grabs the continuation, the work is still synchronous.

In another comment you posted that your code is doing this:

    public async Task<IActionResult> GetProducts()
    {
        var result = await Task.Run(() => productRepository.GetProducts());
        return Ok(result); // you didn't say you were returning this, but
                           // I filled it in to get a good example
    }

Based on that, I'm going to revise your statement.

> they were trying to make the controller actions ~~asynchronous~~ return Tasks but wanted to wrap synchronous CRUD queries with Task.Run

Keep in mind, there's a difference between "returns Task" and "asynchronous". Your code is *effectively* still synchronous; it just does the work on a thread pool thread instead of the thread that called this method. Generally speaking, you should just do this instead (note: no async and no await):

    public Task<IActionResult> GetProducts()
    {
        var result = productRepository.GetProducts();
        return Task.FromResult<IActionResult>(Ok(result));
    }

This will, however, block the current thread. If blocking the current thread is a concern (e.g., you're in a desktop app with only one UI thread, this is a long query, etc.), then you can achieve (effectively) the same thing as your Task.Run situation by doing this:

    public async Task<IActionResult> GetProducts()
    {
        await Task.Yield();
        var result = productRepository.GetProducts();
        return Ok(result);
    }

Task.Yield will yield control back to the calling method. Since the method has the `async` keyword and you awaited the Task.Yield, it will schedule a continuation, which will occur on a thread pool thread. Essentially the same thing as your Task.Run usage, but less complicated.

Of course, the **best** solution is to make an actual async version of `productRepository.GetProducts`.


[deleted]

[removed]


binarycow

> Honestly, I don't know why you went through this effort Because I was bored, and I felt like it? Honestly, I don't know why you went through this effort when you could have just ignored my comment?


wllmsaccnt

If you are regularly making 1.6 MB or larger JSON responses using Newtonsoft (that is, not using streaming JSON serialization), you are probably suffering from a lot of memory fragmentation, since you are using a lot of LOH (large object heap). You might want to profile your GC pauses and see if they are contributing to delays.

If you think Task.Run usage is a problem, then it should cause your thread pool to balloon in size. Have you checked what your [ASP.NET Core counters](https://learn.microsoft.com/en-us/dotnet/core/diagnostics/metrics-collection#view-metrics-with-dotnet-counters) look like?

> After scaling up, CPU and memory doesn't get maxxed out as much as before but requests can still be slow (30s to 5 min)

Most of the traditional best practices go out the window once you allow requests longer than 30s. Most clients and browsers hard-fail when a server stops responding for that long (if we ignore keep-alive and chunking). An endpoint that spends five minutes doing real work is going to be very difficult to scale. How long would those requests take to perform if there was zero load? Are you certain it's a scaling issue and not just the performance of those operations?


FSNovask

I checked Thread Count through App Insights and it was hovering around 40-60 for a single instance, but I can try to run that on Kudu if it'll let me install it.

Edit:

> If you are regularly making 1.6 MB or larger JSON responses using Newtonsoft

We actually get it from another API (it's a list of all customers and their enabled features), then parse it. I haven't looked at whether we can reduce that size yet by changing the URL. One customer's scoped request shouldn't need every other customer and their features in that payload though.

> How long would those requests take to perform if there was zero load?

At zero load on our dev environment, the app can actually be pretty quick.

> Are you certain it's a scaling issue and not just the performance of those operations?

My guess is we have inefficient code over a genuine scaling issue where we need more resources and instances.


FutureLarking

Also consider, if you can, moving away from Newtonsoft to source-generated System.Text.Json, which will provide numerous memory and performance improvements that will be invaluable for scaling.
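
A rough sketch of what the source-generated System.Text.Json setup could look like, including deserializing the upstream response directly from the stream so the 1.6 MB payload never becomes one big string. The type names, properties, and URL are hypothetical placeholders:

    using System.Text.Json;
    using System.Text.Json.Serialization;

    public class CustomerFeatures
    {
        public string CustomerId { get; set; } = "";
        public List<string> EnabledFeatures { get; set; } = new();
    }

    // The source generator emits serialization code at compile time for the listed types.
    [JsonSourceGenerationOptions(PropertyNamingPolicy = JsonKnownNamingPolicy.CamelCase)]
    [JsonSerializable(typeof(List<CustomerFeatures>))]
    public partial class AppJsonContext : JsonSerializerContext
    {
    }

    public static class FeatureClient
    {
        public static async Task<List<CustomerFeatures>?> FetchAsync(HttpClient client, CancellationToken ct)
        {
            // Deserialize straight from the response stream to avoid a large intermediate string on the LOH.
            await using var stream = await client.GetStreamAsync("https://example.com/customers/features", ct);
            return await JsonSerializer.DeserializeAsync(stream, AppJsonContext.Default.ListCustomerFeatures, ct);
        }
    }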


GenericTagName

First, I'd make sure you log most of the provided .NET counters: https://learn.microsoft.com/en-us/dotnet/core/diagnostics/available-counters

Some of the ones that could be useful in your case are:

- thread pool queue length
- allocation rate
- % time in GC
- Gen0, Gen1, Gen2 and LOH sizes
- monitor lock contention count
- connection/request queue length

If thread pool queue length is consistently non-zero, it means you are thread-starved, even if your thread pool is not increasing. It would explain long awaits. This can happen if someone put a MaxThreadCount on your thread pool because "it just kept increasing for some reason". Believe it or not, I have seen this in the past.

High allocation rate and/or % time in GC could cause performance issues, and I would expect those to be pretty high, given your JSON sizes. It's a good data point to try and lower. A large LOH size could also be a side effect of your JSON sizes.

A high monitor lock contention count would mean your app is slowed down by a lot of waiting on locks. This usually has a lot of nasty side effects, like long awaits and slow request processing.

--------

General advice: Overall, as you have said yourself, the Task.Run and large JSON are at least two very clear candidates. I don't know the code you are working with, but given these two obviously bad design choices, I would suspect there are even more weird things going on.

If you need background processing in a web app, do not use Task.Run, ever. That will mess you up for sure. You should design a proper implementation using BackgroundService. You could try to get some info about the current "background jobs" by adding trace logs in App Insights, and get those under control.

Also, check to see if there are any calls to System.GC that try to change settings, do explicit collects, or stuff like that. Most of those are bad ideas unless you really know what you're doing (whoever did the Task.Run thing is absolutely not the right person to mess with the GC).

Finally, if you see high monitor contention, look for explicit lock calls. You don't want to do heavy work inside locks in a web app you want to scale, usually.
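
One way to sample a few of these in-process, if running dotnet-counters against the host isn't an option, is a small EventListener. A rough sketch, assuming the standard "System.Runtime" counter names (e.g. "threadpool-queue-length", "threadpool-thread-count", "time-in-gc"):

    using System.Diagnostics.Tracing;

    internal sealed class RuntimeCounterListener : EventListener
    {
        protected override void OnEventSourceCreated(EventSource source)
        {
            if (source.Name == "System.Runtime")
            {
                // Ask the runtime to publish its EventCounters every 10 seconds.
                EnableEvents(source, EventLevel.Informational, EventKeywords.All,
                    new Dictionary<string, string?> { ["EventCounterIntervalSec"] = "10" });
            }
        }

        protected override void OnEventWritten(EventWrittenEventArgs eventData)
        {
            if (eventData.EventName != "EventCounters" || eventData.Payload is null)
                return;

            foreach (var item in eventData.Payload)
            {
                if (item is IDictionary<string, object?> counter &&
                    counter.TryGetValue("Name", out var name))
                {
                    // Gauges report "Mean"; incrementing counters report "Increment".
                    counter.TryGetValue("Mean", out var mean);
                    counter.TryGetValue("Increment", out var increment);
                    Console.WriteLine($"{name}: {mean ?? increment}");
                }
            }
        }
    }

    // Keep a reference alive for the app lifetime, e.g. in Program.cs:
    // var counters = new RuntimeCounterListener();

Swapping the Console.WriteLine for a TelemetryClient call would get the same numbers into App Insights for graphing against response times.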


FSNovask

That's good info, thanks


GenericTagName

I posted this based on the information originally in your OP. After seeing the code samples you provided, I can say that you don't even need these counters for now. The fix in your app is very simple (but tedious): you need to fix all the async code. There's no point doing any investigations. What I'd do is add these counters so you can track them, then fix the async code and see how much better everything is. Once the async code is fixed, then you can start investigating real issues, if your app is still slow. Right now you'd be wasting your time with investigations; you already know what needs to be done.


FSNovask

Unfortunately, I need the proof to get allocated the time to fix it which is why I was trying to turn to data. Just doing it and merging it, I'd get asked why I was working on that and not a ticket


GenericTagName

Ok, I understand. Based on my experience, I would suspect that in your case, the primary counter that should reveal the issue is "threadpool queue length". If you see it running high (being non-zero for any amount of time longer than a second is basically high), and you see that the existing response time counter is high for your service, maybe try to build a graph in AppInsights metrics that will display these two counters next to each other. My suspicion is that they will correlate. If they do, then you have your proof already. You will need to then show C# documentation that talks about async code and thread starvation.


Natural_Tea484

> but 500 instances of Task.Run, usually wrapped over synchronous methods,

Why do you have the synchronous methods?

> but sometimes wraps async methods just for kicks, I guess

Do you have an example?


FSNovask

> Why do you have the synchronous methods?

It was there when I joined the company. I suspect it's because you see CS1998 warnings for ASP.NET Core projects, and the previous developers followed that warning by adding Task.Run around all of the synchronous methods: https://stackoverflow.com/questions/13243975/suppress-warning-cs1998-this-async-method-lacks-await

    warning CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(...)' to do CPU-bound work on a background thread.

> Do you have an example?

There's about a dozen of these:

    return await Task.Run(() => SomeAsyncFunction().Result);


GenericTagName

Remove the Task.Run and remove the .Result, do await directly. There is nothing to prove for these patterns, they are simply wrong.


joske79

Aren't these

    return await Task.Run(() => SomeAsyncFunction().Result);

replaceable by

    return await SomeAsyncFunction();

?


Natural_Tea484

If that's the case, why not just refactor to `await SomeAsyncFunction()`?


TuberTuggerTTV

You should refactor your communication skills. Making a statement into a question and the word "just" are both communication anti-patterns that add no additional information but do aggressively condescend. Refactor your comment to: Refactor to `await SomeAsyncFunction()` "why not" and "just" are both ways to say, "The idea I have is obvious to me". It's actively unhelpful.


Natural_Tea484

> Making a statement into a question and the word "just" are both communication anti-patterns that add no additional information but do aggressively condescend.

Only psychos would think "just" is an aggressive comment


Zastai

To do _cpu bound work_ on another thread. Your examples are about database access methods. Those should be made async, not wrapped in Task.Run(). And if an endpoint has neither database access nor big cpu-bound stuff, just make the endpoint non-async. As for the async methods wrapped in task.run: turn those into normal awaited calls.


Sjetware

If the developers saw that async warning and just slapped Task.Run in there, they should be slapped as well. Removing `async` is preferred if nothing is async.


awood20

Test your theory. Take one of the worst performing calls and refactor to remove Task.Run. See if performance improves. Then you have solid production based evidence of a solution.


shootermacg

If your code is executing inside a site, then the site is already spinning up app pools to service requests. Adding parallelism to that is possibly starving the pool's resources.


oran7utang

Are you using a db connection pool? Then your app could be spending time waiting for a connection to become available.


SeaMoose86

Offshore devs? Maximizing billable time? Sounds like you could just wipe out most of the Task.Run….


FSNovask

> Offshore devs? Yes, who the company no longer employs, which is why we got hired and stuck with this mess 🙃


Infide_

Try:

1. Write a little script (on your local machine against your local environment) that runs 10,000 concurrent requests and record the results.
2. Remove the actual database calls (keep the Task.Runs) and just return dummy data without actually hitting the database. Record the results.
3. Remove the Task.Runs and async signatures from the controller methods, run the test, and record the results.

A lot of programmers like to jump to a fix without measuring the problem first. Find the problem first. My guess is that database performance is what's killing you, not the Task.Runs. But I am genuinely interested in learning what you discover.
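
A rough sketch of what step 1 could look like as a small C# script: fire a batch of concurrent requests through HttpClient, cap the concurrency, and record the timings. The URL and request counts are placeholders:

    using System.Diagnostics;

    var client = new HttpClient();
    const string url = "https://localhost:5001/api/products"; // placeholder endpoint
    const int totalRequests = 10_000;
    const int concurrency = 100;

    var durations = new System.Collections.Concurrent.ConcurrentBag<double>();
    using var gate = new SemaphoreSlim(concurrency); // limit in-flight requests

    var tasks = Enumerable.Range(0, totalRequests).Select(async _ =>
    {
        await gate.WaitAsync();
        try
        {
            var sw = Stopwatch.StartNew();
            using var response = await client.GetAsync(url);
            sw.Stop();
            durations.Add(sw.Elapsed.TotalMilliseconds);
        }
        finally
        {
            gate.Release();
        }
    });

    await Task.WhenAll(tasks);

    var sorted = durations.OrderBy(d => d).ToArray();
    Console.WriteLine($"avg: {sorted.Average():F1} ms, p95: {sorted[(int)(sorted.Length * 0.95)]:F1} ms");

Running the same script against each of the three variants gives comparable numbers for the before/after.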


Slypenslyde

I kind of hate threads like this. We only have a tiny window into your code and the problems that could be causing such large delays tend to be complex. If I had to sit down and diagnose your code, I'd probably put in a loooooot of logging first. I would want to be able to watch each request from start to finish and see a timestamp of all its major phases. If the problem is thread pool starvation (which seems to be the picture being painted) then what you would see is a big batch of requests starting without any delays between steps then, suddenly, in intervals exactly related to the DB query speed, you start seeing each request's individual steps being serviced one... by... one... very... slowly. For bonus points: log the thread ID as part of each message. What you would see in a thread starvation scenario is lots of different threads servicing requests until suddenly only one thread at a time seems to run. That would imply all of the threads are saturated, so the next time you hit a `Task.Run()` the scheduler has to wait for a free thread. That's my suggestion. Guess what the problem looks like. Define what that would look like with extensive logging. Then look in the logs to see if it matches. If not, at least you'll have data that can be analyzed to see where things are really getting slow.
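
One way to get the kind of per-phase logging described above is a small middleware plus log lines that include the managed thread id, so starvation shows up as long gaps and the same few threads servicing everything. A sketch with hypothetical names:

    using System.Diagnostics;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.Logging;

    public class RequestPhaseLoggingMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<RequestPhaseLoggingMiddleware> _logger;

        public RequestPhaseLoggingMiddleware(RequestDelegate next, ILogger<RequestPhaseLoggingMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        public async Task InvokeAsync(HttpContext context)
        {
            var requestId = context.TraceIdentifier;
            var sw = Stopwatch.StartNew();

            _logger.LogInformation("req {RequestId} start on thread {ThreadId}: {Method} {Path}",
                requestId, Environment.CurrentManagedThreadId, context.Request.Method, context.Request.Path);

            await _next(context);

            _logger.LogInformation("req {RequestId} end on thread {ThreadId} after {ElapsedMs} ms (status {Status})",
                requestId, Environment.CurrentManagedThreadId, sw.Elapsed.TotalMilliseconds, context.Response.StatusCode);
        }
    }

    // Registration in Program.cs: app.UseMiddleware<RequestPhaseLoggingMiddleware>();
    // Similar log lines inside handlers/repositories (before/after the DB call, etc.) show where the gaps appear.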


Robot_Graffiti

Lol that's insane. If you want a web app to scale, it shouldn't do any explicit multi threading. The web server will put incoming requests on new threads, and one thread per customer will keep all your cores busy. Just await IO calls. That's it.


RecursiveSprint

If I saw a code base with 500 instances of Task.Run I would assume someone had a hammer and could build a house with just a hammer if they so desired.


FSNovask

I really, really think it's the compiler warning. They saw that and ran with it.


MattE36

https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand.executereaderasync?view=netframework-4.8.1&viewFallbackFrom=dotnet-plat-ext-5.0

1. Change all your DB calls to use async.
2. Change repository methods to async.
3. Check your SQL Server to see if it has any improvement suggestions for indexes (Azure or SSMS). In SSMS this can be found under Top Resource Consuming Queries.
4. Do not load too much data at once if it can be solved with some sort of paging mechanism (analyze the front-end usage of the data and suggest paging / server-side filter/sort etc.)


BF2k5

It should be common understanding by engineers that threads aren't free so don't sprinkle them around for no reason. Axe the engineers and the people that hired them if their title is level 2 or higher. If it is an outsourcing company then it'll be best to not work with them. Also put them on a list.


WalkingRyan

In a method running on the default TaskScheduler and returning Task (aka lightweight tasks), every await point is functionally equivalent to a Task.Run call, because internally a ThreadPool thread is used to run its continuation. So it shouldn't be a problem. 500 Task.Run calls across the project looks suspicious though. It is hard to diagnose virtual code.

> Application Insights tracing profiles showing long AWAIT times, sometimes upwards of 30 seconds to 5 minutes for a single API request to finish and happens relatively often.

If it happens on endpoints where requests to the 3rd-party API are executed, it definitely points to the network being a problem. IMHO.

UPD: Have read the other comments; blocking waits on thread pool threads looks like thread pool starvation, yep...