Thread-Based Request Handling — Chapter 6

When implementing a server, a programmer is faced with a dilemma as to whether to use threads or processes to handle requests. Both have advantages and disadvantages. From its very inception, MySQL has used threads.


Threads Versus Processes
Perhaps the most important difference between a process and a thread is that a child thread shares the heap (global program data) with the parent, while a child process does not.

Advantages of Using Threads
• Reduced memory utilization. The memory overhead of creating another thread is limited to the stack plus some bookkeeping memory needed by the thread manager.
• No advanced techniques required to access server-global data. If the data could possibly be modified by another concurrently running thread, all that needs to be done is to protect the relevant section with a mutual exclusion lock or mutex.
• Creating a thread takes much less time than creating a process because there is no need to copy the heap segment, which could be very large.
• The kernel spends less time in the scheduler on context switches between threads than between processes. This leaves more CPU time for the heavily loaded server to do its job.
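As a quick illustration of the shared-heap point above, the sketch below (illustrative names, not MySQL code) spawns several POSIX threads that all update the same global variable directly, protected by a mutex as the list recommends:

```c
#include <pthread.h>

/* A global counter lives in the shared data segment:
   every thread in the process sees the same variable. */
static int shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment_task(void *arg)
{
    (void)arg;
    /* Protect the shared variable with a mutex. */
    pthread_mutex_lock(&counter_lock);
    shared_counter++;
    pthread_mutex_unlock(&counter_lock);
    return NULL;
}

/* Spawn n threads that each bump the counter, then join them all. */
int run_counter_demo(int n)
{
    pthread_t tids[64];
    if (n > 64) n = 64;
    for (int i = 0; i < n; i++)
        pthread_create(&tids[i], NULL, increment_task, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(tids[i], NULL);
    return shared_counter;
}
```

No data is copied and no special sharing machinery is needed; the mutex is the only price of admission.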

Disadvantages of Using Threads
• Programming mistakes are very expensive. If one thread crashes, it brings the whole server down.
• Threaded servers are notorious for synchronization bugs that are nearly impossible to duplicate in testing but happen at a very wrong time in production. The high probability of such bugs is a result of having a shared address space, which brings on a much higher degree of thread interaction.
• Mutex contention at some point can get out of hand. If too many threads try to acquire the same mutex at the same time, this may result in excessive context switching, with lots of CPU time spent in the kernel scheduler and very little left to do the job.
• 32-bit systems are limited to 4 GB address space per process. Since all threads share the same address space, the whole server is theoretically limited to 4 GB of RAM even when there is a lot more physical RAM available.
• The crowded 32-bit address space presents another problem. Each thread needs some room for its stack. When a stack is allocated, even if the thread does not use the majority of the allocated space, the address space of the server has to be reserved for it. Each new stack reduces potential room for the heap. Thus, even though there might be plenty of physical memory, it may not be possible to have large buffers, to have a lot of concurrent threads, and to give each thread plenty of room for its stack at the same time.

Advantages of Using Forked Processes
• Programming mistakes are not so fatal. Although a definite possibility, it is not as easy for a rogue forked-server process to disrupt the whole server.
• Programming mistakes are much less likely. Most of the time, the programmer only needs to think of one thread of execution, undisturbed by possible concurrent intruders.

• Fewer bugs. If a bug does happen, it is usually fairly easy to duplicate. Because each forked process has its own address space, there is little interaction between processes.
• On a 32-bit system, the issue of running out of address space is usually not as acute.

Disadvantages of Using Forked Processes
• Memory utilization is suboptimal. Possibly large memory segments are copied unnecessarily when a child is forked.
• Special techniques are required to share data between processes. This makes it cumbersome to access the data global to the server.
• Creating a process requires more overhead in the kernel than creating a thread. One big performance hit is the need to copy the data segment of the parent process. Linux, however, cheats in this area by implementing what is called copy-on-write. The actual copy of a parent process page does not take place until the child or the parent modifies that page. Until then, both use the same page.
• Context switches between processes are more time-consuming than between threads because the kernel needs to switch the pages, file descriptor tables, and other extra context info. Less time is left for the server to do the actual work.
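The separate-address-space and copy-on-write points can be demonstrated in a few lines of C (the function name is invented for illustration). The child's write to the global triggers a private page copy and never reaches the parent:

```c
#include <sys/wait.h>
#include <unistd.h>

static int heap_value = 1;

/* Fork a child that modifies its copy of heap_value. Under
   copy-on-write, the write causes the kernel to give the child a
   private copy of that page; the parent's copy is untouched. */
int run_fork_demo(void)
{
    pid_t pid = fork();
    if (pid == 0) {            /* child: modify its own copy and exit */
        heap_value = 99;
        _exit(0);
    }
    waitpid(pid, NULL, 0);     /* parent: wait for the child to finish */
    return heap_value;         /* still 1: address spaces are separate */
}
```

Contrast this with the thread example earlier, where the same write would have been visible to every thread in the process.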

In summary, a threaded server is ideal when a lot of data needs to be shared between the connection handlers and when the team has the skills to program threads safely.

Implementation of Request Handling

The server listens in the main thread for connections. For each connection, it allocates a thread to handle it. Depending on the server configuration settings and current status, the thread may be either created anew or dispatched from the thread cache. The client issues requests and the server satisfies them until the client sends a session-terminating command (COM_QUIT) or until the session ends abnormally. Upon terminating the client session, depending on the server configuration settings and status, the thread may either terminate or enter the thread cache to wait for another request dispatch.
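A minimal model of this dispatch loop is sketched below. It is not the real server code: the "client" is simulated by an array of command codes, the command values are invented, and the handler thread is joined rather than cached. But the shape — one thread per connection, serving requests until COM_QUIT — is the same:

```c
#include <pthread.h>

enum { COM_QUERY = 3, COM_QUIT = 1 };  /* hypothetical command codes */

struct session {
    const int *commands;  /* simulated stream of client commands */
    int served;           /* requests satisfied before COM_QUIT */
};

/* Per-connection handler: serve requests until the client sends
   COM_QUIT. A real server would read commands from the network. */
static void *handle_connection(void *arg)
{
    struct session *s = arg;
    for (const int *cmd = s->commands; *cmd != COM_QUIT; cmd++)
        s->served++;          /* stand-in for executing the command */
    return NULL;
}

/* Main-thread side: one thread per connection, joined here for
   simplicity (the real server would instead return the thread to
   the thread cache or let it terminate). */
int serve_session(const int *commands)
{
    struct session s = { commands, 0 };
    pthread_t tid;
    pthread_create(&tid, NULL, handle_connection, &s);
    pthread_join(tid, NULL);
    return s.served;
}
```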

Structures, Variables, Classes, and API

Perhaps the most important class for threads is THD, which is a class for thread descriptors. Nearly every one of the server functions inside the parser and optimizer accepts a THD object as an argument, and it usually comes first in the parameter list. Whenever a thread is created, its descriptor is put into a global thread list.
The list is mainly used for three purposes:
• To provide the data for the SHOW PROCESSLIST command
• To locate the target thread when executing the KILL command
• To signal all threads to terminate during shutdown
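A stripped-down sketch of such a global thread list follows; struct thd and the function names are illustrative stand-ins for the real THD class and server internals. Every traversal happens under one mutex, which is what makes the SHOW PROCESSLIST and KILL uses safe:

```c
#include <pthread.h>

/* Minimal stand-in for a thread descriptor on the global list. */
struct thd {
    unsigned long id;      /* connection/thread id */
    int killed;            /* set by KILL, checked by the owning thread */
    struct thd *next;
};

static struct thd *thread_list = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called when a thread is created: push its descriptor onto the list. */
void thd_register(struct thd *t)
{
    pthread_mutex_lock(&list_lock);
    t->next = thread_list;
    thread_list = t;
    pthread_mutex_unlock(&list_lock);
}

/* KILL: walk the list under the lock and flag the target thread. */
int thd_kill(unsigned long id)
{
    int found = 0;
    pthread_mutex_lock(&list_lock);
    for (struct thd *t = thread_list; t; t = t->next)
        if (t->id == id) { t->killed = 1; found = 1; break; }
    pthread_mutex_unlock(&list_lock);
    return found;
}

/* SHOW PROCESSLIST: visit every entry (here we merely count them;
   the real command formats one output row per descriptor). */
int thd_count(void)
{
    int n = 0;
    pthread_mutex_lock(&list_lock);
    for (struct thd *t = thread_list; t; t = t->next) n++;
    pthread_mutex_unlock(&list_lock);
    return n;
}
```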

Thread Programming Issues

MySQL faces many of the same complications as other programs that depend on threads.

Standard C Library Calls

When writing code that can be executed concurrently by several threads, functions from external libraries must be called with extra care. There is always a chance that the called code uses a global variable, writes to a shared file descriptor, or uses some other shared resource without ensuring mutual exclusion. If that is the case, the call must be protected with a mutex.
While exercising this caution, MySQL must also avoid unnecessary protection, which would degrade performance. Where a thread-safe counterpart of a call is available, it is used; only when none is detected is a protective mutex enabled as a last resort.
Overall, MySQL saves itself a lot of thread-safety worries by implementing many standard C library equivalents in the portability wrapper in mysys and in the string library under strings. Even when C library calls are made eventually, they happen through a wrapper in most cases. If a call on some system unexpectedly turns out to lack thread safety, the problem can be easily fixed by adding a protective mutex to the wrapper.
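The wrapper idea can be sketched like this, using localtime() as the classic non-reentrant offender. The macro and function names are illustrative, not the actual mysys ones:

```c
#include <pthread.h>
#include <time.h>

/* Portability-wrapper sketch: if the platform provides the reentrant
   localtime_r(), use it; otherwise serialize calls to the
   non-thread-safe localtime() with a mutex. HAVE_LOCALTIME_R stands
   in for a configure-time feature test. */
static pthread_mutex_t localtime_lock = PTHREAD_MUTEX_INITIALIZER;

struct tm *my_localtime(const time_t *clock, struct tm *result)
{
#ifdef HAVE_LOCALTIME_R
    return localtime_r(clock, result);
#else
    pthread_mutex_lock(&localtime_lock);
    *result = *localtime(clock);   /* copy out of the shared static
                                      buffer while holding the lock */
    pthread_mutex_unlock(&localtime_lock);
    return result;
#endif
}
```

Because every caller goes through the wrapper, discovering that a call lacks thread safety on some platform means fixing one function, not auditing the whole server.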

Mutually Exclusive Locks (Mutexes)

In a threaded server, several threads may access shared data. If they do so, each thread must make sure the access is mutually exclusive. This is accomplished through mutually exclusive locks, otherwise known as mutexes. As the application’s degree of complexity increases, you face a dilemma as to how many mutexes to use, and which ones should protect what data. On one end of the spectrum, you could have a separate mutex for each variable. This has the advantage of reducing mutex contention to a minimum, but it has a few problems: each mutex consumes memory, the programmer must keep track of which lock protects what, and operations that touch several variables must acquire several locks, which invites deadlock.

On the other end of the spectrum is having a single mutex for everything. This makes it very simple for the programmer — get the lock when accessing a global variable, and release it when done. Unfortunately, this approach has a very negative impact on performance. Many threads would be unnecessarily made to wait while one was accessing some variable that the others did not need to have protected.
The solution is a balanced grouping of the global variables, with a mutex for each group. This is the approach MySQL takes.
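A sketch of the grouped approach, with invented names: one mutex covers a whole family of related status counters, so a thread takes a single lock to update the group, yet threads touching unrelated groups do not contend with it.

```c
#include <pthread.h>

/* One mutex per group of related globals. Here the "status" group
   holds two counters; other groups would get their own locks. */
static pthread_mutex_t lock_status = PTHREAD_MUTEX_INITIALIZER;
static unsigned long questions = 0;     /* statements executed */
static unsigned long slow_queries = 0;  /* statements deemed slow */

/* Update the whole group under a single lock acquisition. */
void status_record_query(int was_slow)
{
    pthread_mutex_lock(&lock_status);
    questions++;
    if (was_slow)
        slow_queries++;
    pthread_mutex_unlock(&lock_status);
}

unsigned long status_questions(void)
{
    pthread_mutex_lock(&lock_status);
    unsigned long q = questions;
    pthread_mutex_unlock(&lock_status);
    return q;
}
```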

Read-Write Locks
A mutually exclusive lock is not always the best solution to protect concurrency-sensitive operations. Imagine a situation when a certain variable is modified by only one thread and only infrequently, but it is read by many others often. If we were to use a mutex, most of the time one reader would end up waiting for the other to finish reading even though it could have just executed concurrently. There is another type of lock that is more suitable for this situation: a read-write lock.
Read locks can be shared, while write locks are exclusive. Thus, multiple readers can proceed concurrently as long as there is no writer.

Condition Variables

A threaded application is often faced with the problem of thread synchronization: one thread needs to know that another has reached a certain state. POSIX threads provide a mechanism to accomplish this: condition variables. A thread waiting for a condition can call pthread_cond_wait(), passing it the condition variable and the mutex used in the given context. The call must also be protected by the same mutex.
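The canonical wait/signal pattern looks like this; note the while loop around pthread_cond_wait(), which re-checks the predicate to guard against spurious wakeups:

```c
#include <pthread.h>

/* The waiter holds the mutex, checks the predicate in a loop, and
   pthread_cond_wait() atomically releases the mutex while sleeping,
   reacquiring it before returning. */
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t state_cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;

void wait_until_ready(void)
{
    pthread_mutex_lock(&state_lock);
    while (!ready)                      /* always re-check the predicate */
        pthread_cond_wait(&state_cond, &state_lock);
    pthread_mutex_unlock(&state_lock);
}

void signal_ready(void)
{
    pthread_mutex_lock(&state_lock);    /* change state under the mutex */
    ready = 1;
    pthread_cond_signal(&state_cond);
    pthread_mutex_unlock(&state_lock);
}
```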

Preemption

The term preemption means interrupting a thread to give the CPU to some other task. MySQL generally uses the “responsible citizen” approach to preemption: the preempting thread sets the appropriate flags, telling the thread being preempted that it needs to clean up and terminate or yield. It then becomes the responsibility of the preempted thread to notice the message and comply.
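A toy version of this cooperative scheme (names invented for illustration): the "killer" only sets a flag, and the worker checks it at a safe point in its loop and exits voluntarily.

```c
#include <pthread.h>

struct worker {
    volatile int killed;   /* set by the preempting thread */
    long rows_processed;
};

/* The worker polls the kill flag at a safe point in its loop;
   when it sees the flag it cleans up and terminates voluntarily. */
static void *worker_loop(void *arg)
{
    struct worker *w = arg;
    while (!w->killed)
        w->rows_processed++;   /* stand-in for real work */
    return NULL;
}

int run_preemption_demo(void)
{
    struct worker w = { 0, 0 };
    pthread_t tid;
    pthread_create(&tid, NULL, worker_loop, &w);
    w.killed = 1;              /* "KILL": request, don't force */
    pthread_join(tid, NULL);   /* worker notices the flag and exits */
    return w.killed;
}
```

This mirrors the thd->killed flag seen in the thread-list sketch earlier: KILL sets the flag, and the target thread does the actual dying.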



