JavaRanch » Java Forums » Languages » C / C++

Is Concurency dependent on CPU architecture?

Ankit Yadav
Greenhorn

Joined: Jul 07, 2009
Posts: 12
I am a bit confused about how concurrency is implemented on the CPU, now that single processors are multi-core. Is developing programs for a multi-core processor the same as for a multiprocessor system, or is there a difference? Moreover, can we implement our programs independently of the processor's ability to run them in parallel (i.e. whether it is multi-core or multiprocessor)?
Anthony Aj Williams
author
Ranch Hand

Joined: Jun 10, 2011
Posts: 56
Ankit Yadav wrote:I am a bit confused about how concurrency is implemented on the CPU, now that single processors are multi-core. Is developing programs for a multi-core processor the same as for a multiprocessor system, or is there a difference? Moreover, can we implement our programs independently of the processor's ability to run them in parallel (i.e. whether it is multi-core or multiprocessor)?


In most respects a multicore processor behaves the same as multiple processors. The only essential difference is the cache architecture: a multicore processor tends to share some level of cache between its cores, whereas separate processors in the same system tend to share little or no cache.

From a developer's perspective they are the same unless you are dealing with very low-level algorithms for communicating between threads.

When developing an application that uses concurrency, it is good to make the algorithm scale with the number of processors. You can explicitly control the number of threads you use by using std::thread::hardware_concurrency() to find the number of hardware threads supported by your system, and scaling appropriately. Alternatively, use a mechanism such as std::async() or a thread pool where the runtime determines the number of threads.

The same code will run on multicore processors, multi-processor systems, and single-core single-processor systems. However, if you have more threads ready to run than there are processor cores, the OS will typically switch between your threads, giving each a small time slice before moving on to the next.


Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/
just::thread C++11 thread library http://www.stdthread.co.uk
Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk
15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976
Ankit Yadav
Greenhorn

Joined: Jul 07, 2009
Posts: 12
Thanks for your valuable response. One more doubt I have:
how do the different levels of the memory hierarchy, exclusive memory versus shared memory, restrict or facilitate the implementation of concurrency?
Anthony Aj Williams
author
Ranch Hand

Joined: Jun 10, 2011
Posts: 56
Ankit Yadav wrote:Thanks for your valuable response. One more doubt I have:
how do the different levels of the memory hierarchy, exclusive memory versus shared memory, restrict or facilitate the implementation of concurrency?


I am sorry, but I'm not sure I understand exactly what you are asking. Could you elaborate on what you mean by exclusive memory and shared memory? Are you asking about NUMA systems?
Ankit Yadav
Greenhorn

Joined: Jul 07, 2009
Posts: 12
Not exactly NUMA, or maybe it is; I have little knowledge of NUMA.
I think I forgot to mention that I was talking about a multi-core/multiprocessor system that has both shared and exclusive memory/cache.

I am asking how the memory hierarchy (RAM and the L1/L2/L3 caches) affects the concurrency model, especially in terms of performance and data integrity.
Moreover, does having more layers of either shared or non-shared memory affect the performance of the overall system?
Anthony Aj Williams
author
Ranch Hand

Joined: Jun 10, 2011
Posts: 56
Ankit Yadav wrote:Not exactly NUMA, or maybe it is; I have little knowledge of NUMA.
I think I forgot to mention that I was talking about a multi-core/multiprocessor system that has both shared and exclusive memory/cache.

I am asking how the memory hierarchy (RAM and the L1/L2/L3 caches) affects the concurrency model, especially in terms of performance and data integrity.
Moreover, does having more layers of either shared or non-shared memory affect the performance of the overall system?


C++11 assumes that all memory is accessible by all threads.

It is possible that different memory regions are "closer" to some cores than others, so accesses from those cores are faster. However, the C++11 model doesn't provide any facilities to determine which core your thread is running on, or to set processor/core affinity for threads or memory allocations. For that, you are left to the whim of your OS.
 
 