In this chapter, we’re going to take a quick detour to discuss parallelism in programming. This will help us better understand the next topic related to GUIs, which is event-driven programming. So, what is parallelism? In short, parallelism involves writing our programs so that they can perform multiple tasks simultaneously. We typically do this through the use of threads, contained within the process that our application is running under. This video will explain what all of that means.
First, let’s look at a process. A process is simply an instance of our application that is running on a computer. While you are watching this video, you probably have several processes running on your computer, such as a web browser or two, maybe a few chat programs, or even an antivirus program. Each of these is contained within a process, which is managed and scheduled by the operating system. In short, the operating system keeps track of all the running processes and figures out how to run them on the available computing resources, such as the cores on our CPU and the amount of RAM available. Finally, as we’ll see, each individual process can contain multiple threads of execution, allowing each process to perform multiple separate tasks simultaneously.
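To make the idea of a process concrete, here is a short sketch (in Python, which we’re assuming for illustration since no code appears in this excerpt) that asks the operating system for the current process’s ID and then spawns a second process. The OS assigns each process its own ID and schedules the two independently.

```python
import multiprocessing
import os

def child_task():
    # The child runs as a separate process, with its own ID and memory.
    print(f"Child process ID: {os.getpid()}")

if __name__ == "__main__":
    # Every running process has an ID assigned by the operating system.
    print(f"Parent process ID: {os.getpid()}")

    # Spawn a second process; the OS now tracks and schedules both.
    child = multiprocessing.Process(target=child_task)
    child.start()
    child.join()  # wait for the child process to finish
```

Running this prints two different process IDs, one for the parent and one for the child, just like the separate entries you would see for them in a tool such as Task Manager.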
If we want to explore the processes that are running on your computer, most modern operating systems provide a tool just for that purpose. In Windows, we would use the Task Manager program, shown here. This is an actual screenshot of my computer’s task manager while I was developing this lecture. As you can see, I had several applications open, including Google Chrome, Microsoft Teams, Visual Studio Code, and even the Pandora music player. Each of these applications consists of one or more running processes, indicated by the number in parentheses. We can also see some statistics about how many of our computing resources are being consumed by each process.
As we stated earlier, the operating system is responsible for scheduling these processes. A modern computer typically has a processor with multiple processing cores, but there are usually many more processes running than there are cores available on the system. So, the operating system must determine which processes are allowed to run at any given time, and then it will switch between those running processes thousands of times each second. To the user, this makes it appear as if all of the programs are running simultaneously, and we’ll usually act as if they are, but in practice it is important to understand that each CPU core can typically only run one process at a time. So, if the computer only has a single processing core, then it is impossible for anything to run truly simultaneously. This diagram shows the various states that a process can be placed into by the operating system. When the process is executing on the processor, we say that it is in the “running” state. If the operating system decides to interrupt it and let another process run, it goes into the “waiting” state, meaning that it can be scheduled again by the operating system. However, if the process tries to access a file or memory that isn’t ready, it will be placed in the “blocked” state until that resource is available. Once it is, the process will shift back to the “waiting” state, ready to be scheduled by the operating system in the future. Finally, any process that is not currently running could also be swapped out of memory, especially if there is a limited amount of memory available on the system or another process is consuming large amounts of memory. So, processes that are either “waiting” or “blocked” could also be “swapped out” of memory, and must be “swapped in” before they can execute again.
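We can actually observe the “blocked” state from code. In this sketch (Python, assumed here for illustration), the process sleeps while waiting on a timer, so the OS takes it off the CPU: wall-clock time passes, but the process accumulates almost no CPU time, because the OS is running other processes on that core in the meantime.

```python
import time

# Measure both wall-clock time and CPU time around a blocking call.
wall_start = time.perf_counter()   # wall-clock time
cpu_start = time.process_time()    # CPU time used by this process

time.sleep(0.2)  # process is blocked waiting on a timer; OS runs others

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# Wall time is about 0.2 seconds, but CPU time is nearly zero,
# because a blocked process is not executing on any core.
print(f"Wall time: {wall_elapsed:.3f}s, CPU time: {cpu_elapsed:.3f}s")
```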
Finally, let’s also talk about threads. Within a process, there can be one or more threads, sometimes known as “threads of execution,” running at any given time. A thread is a connected sequence of instructions in the code. For example, think about tracing the execution of a program with your finger. If there is only one linear path through the code, then that application has a single thread. Of course, if the code includes functions, recursion, or loops, the path may look more complex, but you can still draw a single line that goes all the way through the code, from beginning to end. Up to this point, all of the programs we’ve developed have had only a single thread, so this is what you should be used to. Each thread starts with a function call, and the main thread usually starts with the main() method in our code.
Threads, like the processes that contain them, are scheduled by the operating system to run on any available processor core. When a process contains multiple threads, the operating system will keep track of each individual thread as it schedules the process to execute on a core of the CPU. The biggest difference between threads and processes is that threads exist within a process, and therefore they have access to that process’s shared memory. That means multiple threads can access the same data, making them a powerful way to take large computations and divide them up to run faster across the many cores available in most modern computers.
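The idea of threads sharing a process’s memory can be sketched as follows (again in Python, assumed for illustration): two threads each sum half of a shared list and write their partial results into a shared `results` list, which the main thread then combines. Note one caveat: in standard CPython, the global interpreter lock prevents pure-Python threads from running bytecode truly in parallel, so this sketch illustrates shared memory rather than an actual speedup.

```python
import threading

data = list(range(1_000_000))
results = [0, 0]  # shared memory: both threads write into this list

def partial_sum(index, chunk):
    # Each thread computes its share and stores it in the shared list.
    results[index] = sum(chunk)

# Split the work in half, one thread per half.
mid = len(data) // 2
threads = [
    threading.Thread(target=partial_sum, args=(0, data[:mid])),
    threading.Thread(target=partial_sum, args=(1, data[mid:])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both threads to finish

# Because the threads share memory, the main thread can combine
# their results directly, with no data copied between processes.
total = results[0] + results[1]
print(total)  # same answer as sum(data)
```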
There are also tools available to examine the running threads on your system. This is a screenshot from Process Explorer, a tool available for Windows 10 that gives more detail about each running process. In this screenshot, we can see that one of the processes for Visual Studio Code contains 28 individual threads. We can see statistics about each thread, including the actual function call used to start the thread. This is a really great way to explore the processes running on your system and better understand how they are structured.
As you might have guessed, writing programs that use multiple threads can add a whole new layer of complexity to our programs. In this chapter, we’ll briefly explain some of those concerns and give you the tools to work around them.
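As a preview of one of those concerns, consider what happens when multiple threads update the same shared variable. If two threads read the counter at the same moment, both could write back the same value and lose an update, a bug known as a race condition. The standard workaround, shown in this Python sketch (the language is assumed for illustration), is a lock that lets only one thread update the variable at a time.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, two threads could read the same value of
        # counter and each write back the same result, losing an update.
        with lock:
            counter += 1

# Four threads all updating the same shared counter.
threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 every time, because the lock serializes the updates
```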