c# - WCF and IIS performance spawning thousands of tasks using the Task Parallel Library


My scenario:

A user login triggers the WCF server to kick off 20 async tasks. Each separate task (named Job in the code) calls external SOAP services. 1,000 simultaneous users logging in means 20,000 async tasks. We must call the services in batches (due to external limitations) and fetch paged data, which is allowed in parallel. Every one of the 20 tasks spawns 10 tasks of its own, meaning 400 tasks per individual login, or 400,000 tasks for 1,000 simultaneous logins.

My two questions:

  1. How will this impact our IIS and server performance? I understand the tasks are queued and run in parallel where possible, but are there limits on the recommended number of tasks?

  2. Am I using the right approach in creating these tasks? Is everything running async (except the WaitAll)? See the code below:

Creating tasks for each service to be called:

foreach (var job in jobs)
{
    Task.Factory.StartNew(() => job.Fetch());
}

The Job class called above:

public async void Fetch()
{
    var batchList = await FetchBatches();
    // saves the list to the database
    MergeAndSaveBatchList(batchList);
}

private async Task<IEnumerable<BatchResult>> FetchBatches()
{
    var taskList = new List<Task<BatchResult>>();
    foreach (var batch in _batchesList)
    {
        // this is calling the external services
        taskList.Add(Task.Factory.StartNew(() => batch.Fetch()));
    }
    await Task.WhenAll(taskList);
    return taskList.Select(tl => tl.Result);
}
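For comparison, here is a self-contained sketch of the same flow with the tasks awaited end to end (`BatchResult`, the batch bodies, and the delays are stand-ins for the question's types). Returning `Task` instead of `async void` lets the caller observe completion and exceptions, and since the calls are already Task-returning there is no need to wrap them in `Task.Factory.StartNew`:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Stand-ins for the question's types.
class BatchResult { public int Rows; }

class Batch
{
    public async Task<BatchResult> FetchAsync()
    {
        await Task.Delay(10);                // stands in for the external SOAP call
        return new BatchResult { Rows = 1 };
    }
}

class Job
{
    private readonly List<Batch> _batchesList =
        Enumerable.Range(0, 10).Select(_ => new Batch()).ToList();

    // Task (not async void) so the caller can await completion and see exceptions.
    public async Task FetchAsync()
    {
        var batchList = await FetchBatchesAsync();
        MergeAndSaveBatchList(batchList);
    }

    private async Task<IReadOnlyList<BatchResult>> FetchBatchesAsync()
    {
        // The calls already return tasks, so no StartNew wrapper is needed.
        var taskList = _batchesList.Select(b => b.FetchAsync()).ToList();
        return await Task.WhenAll(taskList);
    }

    private void MergeAndSaveBatchList(IReadOnlyList<BatchResult> batchList)
    {
        // saves the list to the database in the question; a no-op here
    }
}

class Program
{
    static async Task Main()
    {
        var jobs = Enumerable.Range(0, 20).Select(_ => new Job());
        // Await all jobs instead of fire-and-forget StartNew.
        await Task.WhenAll(jobs.Select(j => j.FetchAsync()));
        Console.WriteLine("all jobs completed");
    }
}
```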

A bit of simple maths:

All Windows OS products since Windows Vista use the IANA suggested range for dynamic/ephemeral ports.

Every outgoing TCP/IP connection requires an ephemeral port on which to receive the response. When the connection is done, it is held in the TIME_WAIT state for 120 seconds before being freed for reuse.

The IANA range of ephemeral ports is 49152 to 65535, a total of 16,384 ephemeral ports.

This means that under optimal conditions, your server can make roughly 16,384 outbound connections every 2 minutes.
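The arithmetic above can be checked in a few lines (the 120-second TIME_WAIT figure is the one quoted in the answer; on Windows it is governed by the `TcpTimedWaitDelay` registry value):

```csharp
using System;

class PortMath
{
    static void Main()
    {
        // IANA dynamic/ephemeral range is 49152..65535 inclusive.
        const int firstPort = 49152;
        const int lastPort = 65535;
        int portCount = lastPort - firstPort + 1;        // 16384 ports

        const int timeWaitSeconds = 120;                 // TIME_WAIT hold per connection
        double connectionsPerSecond = portCount / (double)timeWaitSeconds;

        Console.WriteLine($"{portCount} ports, ~{connectionsPerSecond:F0} sustained new connections/second");
    }
}
```

At roughly 136 new connections per second sustained, 400,000 requests would take well over half an hour even with zero processing time, which is the point the answer is making.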

Because of HTTP connection pooling, there might not be a direct correlation between requests and connections, but I'd be extremely worried that an architecture which requires this many requests will result in port exhaustion.

Now, if you're making all the requests to the same service, you'll run up against a different limit: the connection limit for HTTP requests to the same host, which defaults to 10 on a server. When you go over that limit, requests get queued, which will result in unacceptable latency when you queue thousands of requests to the same host. You can fiddle with this limit, but if you set it too high, the remote server may start denying your requests.
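The per-host limit referred to here is `ServicePointManager.DefaultConnectionLimit` in .NET; a minimal sketch of raising it (the value 50 is illustrative, not a recommendation):

```csharp
using System;
using System.Net;

class ConnectionLimitDemo
{
    static void Main()
    {
        // Defaults to 2 for client apps and 10 when hosted in ASP.NET.
        // Raising it allows more simultaneous HTTP connections to a single
        // host, at the risk of the remote server throttling or denying you.
        ServicePointManager.DefaultConnectionLimit = 50;

        Console.WriteLine(ServicePointManager.DefaultConnectionLimit);
    }
}
```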

can't reduce number of requests required in architecture? figures quote rather high.

