There is an unwritten rule in the Python world: CPU-bound tasks go to multiple processes, IO-bound tasks go to multiple threads. This post puts that rule to a simple test.
In general, threads would have the advantage over processes, because creating a process is comparatively expensive. In (C)Python, however, the GIL (Global Interpreter Lock) means that for CPU-bound work multiple threads effectively run one at a time, and the cost of switching between threads often makes the multithreaded version even slower than a plain single-threaded one. That is why CPU-bound tasks in Python are usually handed to multiple processes: each process has its own interpreter with its own GIL, so they do not interfere with one another.
For IO-bound tasks, by contrast, the CPU spends much of its time waiting while the operating system interacts with the outside world, reading and writing files, talking over the network, and so on. The GIL is released during these blocking calls, so real multithreading becomes possible.
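Both claims are easy to check in isolation before running the full benchmark. Below is a minimal sketch of my own (not part of the original test; the helper names busy, blocked and wall_time are made up, and absolute timings depend on the machine). On CPython, two threads running the pure-Python loop do twice the work in roughly twice the time, i.e. no speedup, while two threads running a blocking sleep overlap almost completely because time.sleep() releases the GIL.

import time
from threading import Thread

def busy(n=5_000_000):      # pure-Python loop: holds the GIL the whole time
    s = 0
    for i in range(n):
        s += i
    return s

def blocked(seconds=1.0):   # blocking call: CPython releases the GIL here
    time.sleep(seconds)

def wall_time(fn, k):       # run fn in k threads, return elapsed wall-clock time
    start = time.time()
    threads = [Thread(target=fn) for _ in range(k)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

if __name__ == "__main__":
    print("busy  x1: {:.2f}s".format(wall_time(busy, 1)))
    print("busy  x2: {:.2f}s".format(wall_time(busy, 2)))     # ~2x the time: no parallel speedup
    print("sleep x1: {:.2f}s".format(wall_time(blocked, 1)))
    print("sleep x2: {:.2f}s".format(wall_time(blocked, 2)))  # still ~1s: the sleeps overlap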
That covers the theory and the quick check; now for a fuller simulated test: heavy computation is stood in for by repeated math.sin() + math.cos() calls, and IO-bound work is simulated with time.sleep(). Python offers more than one way to run code in multiple processes or threads, so several of them are included to see whether they differ in efficiency:
1. Multiprocessing: joblib with the "multiprocessing" backend, multiprocessing.Pool.map, multiprocessing.Pool.apply_async, concurrent.futures.ProcessPoolExecutor
2. Multithreading: joblib with the "threading" backend, threading.Thread, concurrent.futures.ThreadPoolExecutor
from multiprocessing import Pool
from threading import Thread
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time, os, math
from joblib import Parallel, delayed, parallel_backend

def f_IO(a):  # IO-bound task: a blocking sleep stands in for file/network waits
    time.sleep(5)

def f_compute(a):  # CPU-bound task: a pure-Python loop that holds the GIL
    for _ in range(int(1e7)):
        math.sin(40) + math.cos(40)
    return

def normal(sub_f):  # sequential baseline: run the 6 tasks one after another
    for i in range(6):
        sub_f(i)
    return

def joblib_process(sub_f):  # joblib with the "multiprocessing" backend
    with parallel_backend("multiprocessing", n_jobs=6):
        res = Parallel()(delayed(sub_f)(j) for j in range(6))
    return

def joblib_thread(sub_f):  # joblib with the "threading" backend
    with parallel_backend('threading', n_jobs=6):
        res = Parallel()(delayed(sub_f)(j) for j in range(6))
    return

def mp(sub_f):  # multiprocessing.Pool.map
    with Pool(processes=6) as p:
        res = p.map(sub_f, list(range(6)))
    return

def asy(sub_f):  # multiprocessing.Pool.apply_async
    with Pool(processes=6) as p:
        result = []
        for j in range(6):
            a = p.apply_async(sub_f, args=(j,))
            result.append(a)
        res = [j.get() for j in result]

def thread(sub_f):  # raw threading.Thread objects
    threads = []
    for j in range(6):
        t = Thread(target=sub_f, args=(j,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()

def thread_pool(sub_f):  # concurrent.futures.ThreadPoolExecutor
    with ThreadPoolExecutor(max_workers=6) as executor:
        res = [executor.submit(sub_f, j) for j in range(6)]

def process_pool(sub_f):  # concurrent.futures.ProcessPoolExecutor
    with ProcessPoolExecutor(max_workers=6) as executor:
        res = executor.map(sub_f, list(range(6)))

def showtime(f, sub_f, name):  # time one runner and print the elapsed seconds
    start_time = time.time()
    f(sub_f)
    print("{} time: {:.4f}s".format(name, time.time() - start_time))

def main(sub_f):
    showtime(normal, sub_f, "normal")
    print()
    print("------ multiprocessing ------")
    showtime(joblib_process, sub_f, "joblib multiprocess")
    showtime(mp, sub_f, "pool")
    showtime(asy, sub_f, "async")
    showtime(process_pool, sub_f, "process_pool")
    print()
    print("----- multithreading -----")
    showtime(joblib_thread, sub_f, "joblib thread")
    showtime(thread, sub_f, "thread")
    showtime(thread_pool, sub_f, "thread_pool")

if __name__ == "__main__":
    print("----- CPU-bound -----")
    sub_f = f_compute
    main(sub_f)
    print()
    print("----- IO-bound -----")
    sub_f = f_IO
    main(sub_f)
Results:
----- CPU-bound -----
normal time: 15.1212s
------ multiprocessing ------
joblib multiprocess time: 8.2421s
pool time: 8.5439s
async time: 8.3229s
process_pool time: 8.1722s
----- multithreading -----
joblib thread time: 21.5191s
thread time: 21.3865s
thread_pool time: 22.5104s
----- IO-bound -----
normal time: 30.0305s
------ multiprocessing ------
joblib multiprocess time: 5.0345s
pool time: 5.0188s
async time: 5.0256s
process_pool time: 5.0263s
----- multithreading -----
joblib thread time: 5.0142s
thread time: 5.0055s
thread_pool time: 5.0064s
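One knob the benchmark keeps fixed is the pool size: 6 workers for 6 tasks. As a rough rule of thumb (my addition, not something the measurements above establish), a CPU-bound pool is usually sized to the number of cores, while an IO-bound thread pool can be larger because its threads spend most of their time waiting. A sketch reusing f_compute and f_IO from above, to be run under the same __main__ guard:

import os
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

n_cores = os.cpu_count() or 1

# CPU-bound: one process per core is a common starting point;
# extra processes mostly add scheduling and pickling overhead.
with ProcessPoolExecutor(max_workers=n_cores) as executor:
    list(executor.map(f_compute, range(6)))

# IO-bound: the threads mostly wait, so the pool can exceed the core count.
with ThreadPoolExecutor(max_workers=4 * n_cores) as executor:
    list(executor.map(f_IO, range(6)))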