Thread Pools

Definition

A thread pool is a collection of pre-created reusable worker threads, designed to avoid the performance overhead of repeatedly creating and destroying threads.

Implementation

Data Members

            
// Pool.h

#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using namespace std;

vector<thread> mWorkers;

queue<function<void()>> mTasks;
mutex mMtxQueue;

condition_variable mCondition;

atomic<bool> mStop;
            
          

Pool’s Data Members

  • mWorkers stores the worker threads that will run tasks.
  • mTasks holds pending tasks as callable objects.
  • mMtxQueue ensures thread-safe access to the task queue.
  • mCondition notifies worker threads when new tasks are available.
  • mStop signals threads to stop during shutdown.

Constructor

            
// Pool.cc

#include "Pool.h"

Pool::Pool(int size)
  : mStop(false)
{
  for (int i = 0; i < size; ++i)
  {
    mWorkers.emplace_back(
      [this]() {
        while (true)
        {
          function<void()> task;
          
          {
            unique_lock<mutex> lock(mMtxQueue);
            mCondition.wait(
              lock,
              [this]() { return mStop || !mTasks.empty(); }
            );

            if (mStop && mTasks.empty())
              return;
          
            task = move(mTasks.front());
            mTasks.pop();
          }

          task();
        }
      }
    );
  }
}
            
          

Pool’s Constructor

Pool launches a specified number of threads on construction. Each thread enters an infinite loop in which it waits on a condition variable, blocking until the pool is stopping or tasks are available. Once notified, the thread locks the task queue, checks the exit condition, and, if tasks remain, pops the next task and executes it. This design keeps idle threads from consuming processor time and lets them exit cleanly when the pool is shut down.

Task Submission

            
// Pool.h

template <class F, class... Args>
void Enqueue(F&& f, Args&&... args)
{
  auto task = bind(forward<F>(f), forward<Args>(args)...);

  {
    lock_guard<mutex> lock(mMtxQueue);
    mTasks.emplace(move(task));
  }

  mCondition.notify_one();
}
            
          

Pool’s Method for Task Submission

Pool::Enqueue() is a templated function that allows tasks of any callable type, along with their arguments, to be submitted to the thread pool. It uses std::bind() in combination with perfect forwarding to package the function and its arguments into a single callable task. This task is then safely pushed onto the task queue using std::lock_guard to ensure exclusive access. After the task is enqueued, the method signals one waiting worker thread via notify_one(), prompting it to wake up and execute the task. This design supports flexible task submission while maintaining thread-safe coordination.

Destructor

            
// Pool.cc

Pool::~Pool()
{
  {
    // Set the flag while holding the queue mutex; otherwise a worker
    // that has just evaluated the wait predicate could miss this
    // notification and block forever.
    lock_guard<mutex> lock(mMtxQueue);
    mStop = true;
  }

  mCondition.notify_all();

  for (thread& worker : mWorkers)
    worker.join();
}
            
          

Pool’s Destructor

The destructor safely shuts down the thread pool by setting mStop = true. mCondition.notify_all() wakes any threads waiting on the condition variable so they can check the mStop flag and exit the loop. Finally, it calls join() on each worker thread to ensure they finish execution before the program continues or exits, preventing undefined behavior.

A Use Case

Maze Generation

Maze generation is the process of algorithmically creating a maze with a solvable path from start to finish. It is often used in games, simulations, and procedural content creation.

Recursive Backtracker

A recursive backtracker is a depth-first search algorithm, often implemented by using recursion or an explicit stack.

Attribution

David Barr’s Programming Mazes is a great resource and starting point to understand how to implement a maze generation algorithm using a recursive backtracker. I tweaked the classic version of the algorithm to spawn new threads at forks, leveraging a thread pool to generate different sub-regions of the maze in parallel.

Video

Maze Generation in Action

Following are the timestamps for the different segments of the demonstration:

  • 00:00 — Single-threaded maze generation
  • 02:07 — Using 2 threads to generate a maze
  • 03:18 — Using 4 threads to generate a maze
  • 04:05 — Using 8 threads to generate a maze
  • 04:32 — Using 16 threads to generate a maze