Multithreading and Multiprocessing, Study Notes 15
Category: Computer Programming

1. Threads and processes
Process: a program cannot run on its own. Only when the program is loaded into memory and the system allocates resources to it can it run; such a running program is called a process. A process does not itself execute anything, it is just the collection of the program's resources.

Thread reference documentation

    Threads:

Thread: a thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of execution in the process. A thread is a single sequential flow of control within a process; a process can run multiple threads concurrently, each performing a different task.

A thread is the smallest unit the operating system can schedule for execution; it is contained within a process and is the actual unit of execution in the process.

    What is a thread?

2. The difference between threads and processes

A process can in fact consist of multiple threads of execution. Each thread runs in the context of the process and shares the same code and global data.

    A thread is the smallest unit the operating system can schedule for execution. It is contained within a process and is the actual unit of execution in the process. A thread is a single sequential flow of control within a process; a process can run multiple threads concurrently, each performing a different task.

Threads share memory space;                                                                                               a process's memory is independent.

Because real network servers need concurrency, threads have become an increasingly important programming model: sharing data between threads is easier than sharing it between processes, and threads are usually more lightweight and efficient than processes.


  • A thread is the smallest unit the operating system can schedule for execution. It is contained within a process and is the actual unit of execution in the process. A thread is a single sequential flow of control within a process; a process can run multiple threads concurrently, each performing a different task.
  • The smallest unit the OS uses to schedule the CPU | Thread: a set of instructions (a flow of control); the thread is what actually executes the instructions
  • all the threads in a process have the same view of the memory: threads in the same process share the same memory space

  • I/O operations do not occupy the CPU (reading and storing data); computation occupies the CPU (1+1...)
  • Python multithreading is not suitable for CPU-bound work but is suitable for I/O-bound work (see the sketch below)
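A minimal sketch of this point (the task functions and timings are invented for illustration, not from the original notes): several sleeping "I/O" tasks finish in roughly the time of one because the GIL is released while waiting, whereas CPU-bound tasks gain little from threads.

```python
# Minimal sketch: Python threads overlap I/O waits, but the GIL keeps
# CPU-bound code effectively serial. Names and timings are illustrative only.
import threading, time

def io_task():
    time.sleep(1)           # simulated I/O wait: releases the GIL

def cpu_task():
    sum(range(10_000_000))  # pure computation: holds the GIL

def timed(target, n=4):
    threads = [threading.Thread(target=target) for _ in range(n)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

print("4 I/O tasks in threads:", timed(io_task))   # roughly 1 second, not 4
print("4 CPU tasks in threads:", timed(cpu_task))  # roughly the same as running them one by one
```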

    Each program's memory is independent; they cannot directly access one another's memory.

Threads share the address space of the process that created them;                                                                        a process has its own address space.

Processes

    Process:

线程能够一向访谈其进度的数据段;                                                                        进度具备父进度的数据段的要好的别本。

A program cannot run on its own; only when the program is loaded into memory and the system allocates resources to it can it run, and such a running program is called a process.

    A program is exposed to the operating system for management as a whole, including calls to various resources (memory and so on); this collection of resource management can be called a process. A process itself cannot execute, it is only a set of instructions; what the operating system actually executes are threads.

A thread can communicate directly with the other threads of its process;                                                                 a process must use inter-process communication to communicate with sibling processes.

## The difference between a program and a process: a program is a collection of instructions, the static description of a process; a process is one execution of a program, a dynamic concept.

    On the surface it looks as if the process is executing, but it is actually threads that execute; a process contains at least one thread.

New threads are easy to create;                                                                                                  a new process requires cloning its parent process.

A process is the operating system's abstraction of a running program; that is, a process is an abstraction of the CPU, main memory, and I/O devices.

    Thread: a thread is an executable context, the smallest unit the CPU needs in order to execute. The CPU is only responsible for computation. A single-core CPU can only do one thing at a time; the reason we can switch between programs is that the CPU executes extremely fast and keeps switching back and forth, which makes it look as if the programs are running at the same time.

A thread can exercise a fair amount of control over threads of the same process;                                                   a process can only control its child processes.

The operating system can run multiple processes at the same time, and each process appears to have exclusive use of the hardware.


  • Each program is allocated its own space in memory; by default, processes cannot access each other's data or operations
  • (QQ, Excel, etc.) A program is exposed to the operating system for management as a whole, including the calls it makes to various resources (memory management, network interfaces, and so on); this collection of resource management can be called a process.
  • For example, the whole QQ application can be called one process
  • For a process to use the CPU (i.e. issue instructions), it must first create a thread;
  • A process itself cannot execute; it is only a collection of resources. To execute, it must first create the smallest unit the operating system can schedule: a thread. A process needs at least one thread in order to run, and when a process is created, one thread is created automatically

    The operating system distinguishes processes by their PID (process identifier). Processes can also be assigned priorities.
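A short sketch of this (the worker function name is made up for illustration): os.getpid() returns the current process's PID and os.getppid() the parent's, so a child started with multiprocessing reports a different PID from the main process.

```python
# Minimal sketch: the OS identifies every process by its own PID.
import os
from multiprocessing import Process

def show_ids(tag):
    print(tag, "pid:", os.getpid(), "parent pid:", os.getppid())

if __name__ == '__main__':
    show_ids("main process")
    p = Process(target=show_ids, args=("child process",))
    p.start()
    p.join()
```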

Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the process's other threads;            changes to a parent process do not affect its child processes.

进度和线程的不相同?

  • 线程分享创立它的进程的地址空间,进度的内部存款和储蓄器空间是单独的
  • 多个线程直接访谈数据经过的多寡,数据时共享的;一个父进程中的多少个子进度对数码的拜见其实是克隆,相互之间是单身的。
  • 线程能够平昔与创设它的历程的此外线程通讯;八个父进度的子进程间的通讯必需透过一个个中代理来完成
  • 新的线程轻松创设;创制新进度需求对其父进程张开叁次克隆
  • 线程能够对创造它的长河中的线程进行调整和操作,线程之间从未实际的附属关系;进程只好对其子进度展费用配和操作
  • 对主线程的改变(撤废、优先级改革等)恐怕影响进度的任何线程的一颦一笑;对经过的退换不会影响子进程

    Threads are created by the main thread (the primary thread), which can directly create new threads; on Linux, every process has one main thread.
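A hedged sketch contrasting the points above (the counter variable and function are invented for illustration): a thread started by the main process modifies the shared global variable, while a child process only modifies its own copy.

```python
# Minimal sketch: threads share their process's memory; a child process gets an independent copy.
import threading
from multiprocessing import Process

counter = 0

def bump():
    global counter
    counter += 1

if __name__ == '__main__':
    t = threading.Thread(target=bump)
    t.start(); t.join()
    print("after thread:", counter)   # 1: the thread changed the shared variable

    p = Process(target=bump)
    p.start(); p.join()
    print("after process:", counter)  # still 1: the child changed only its own copy
```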

 

An example of multithreaded concurrency

import threading,time

def run(n):
    print("task", n)
    time.sleep(2)

t1 = threading.Thread(target=run,args=("t1",))# target = the function this thread will run; args = its arguments (a tuple, so even a single argument needs a trailing comma)
t2 = threading.Thread(target=run,args=("t2",))
t1.start()
t2.start()
  • Starting multiple threads
    ```python
    import threading, time

    def run(n):
        print("task", n)
        time.sleep(2)

    start_time = time.time()
    for i in range(50):
        t = threading.Thread(target=run, args=("t%s" % i,))
        t.start()

    print('cost', time.time() - start_time)
    ```

  • The time measured here is much less than 2 seconds, because the main thread and the child threads it starts run concurrently

  • join() waits for a thread to finish before continuing; it is essentially a wait
    ```python
    import threading
    import time

    def run(n):
        print('task:', n)
        time.sleep(2)

    start_time = time.time()
    thread_list = []
    for i in range(50):
        t = threading.Thread(target=run, args=(i,))
        t.start()
        # If t.join() were called here, each thread would be waited on before the next one
        # starts, and the multithreading would degrade into serial execution
        thread_list.append(t)

    for t in thread_list:
        t.join()  # after the threads are started, join waits for every created thread to finish before the main thread continues

    print('cost:', time.time() - start_time)
    print(threading.current_thread(), threading.active_count())

    ```

threading.current_thread() shows the current thread, and threading.active_count() shows the number of currently active threads.

    The differences between threads and processes:

3. A process contains at least one thread

  • Here the result is a little over 2 seconds and the measured time is accurate. When used this way, join() must come after start() has been called on all the threads; otherwise the threads run one after another and multithreading becomes pointless

    Comparing whether threads or processes are faster is meaningless; they are not comparable.

4. Thread locks
    When a thread is about to modify shared data, it can put a lock on the data to prevent other threads from modifying it before it has finished; any other thread that wants to modify the data must then wait until the lock is released before it can access the data (a minimal sketch follows below).
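A minimal sketch of the idea (the counter and the number of threads are arbitrary, not from the notes): every thread acquires the lock before touching the shared value, so the increments cannot interleave.

```python
# Minimal sketch: protect a shared counter with threading.Lock.
import threading

num = 0
lock = threading.Lock()

def add_one():
    global num
    with lock:        # acquire the lock; it is released automatically when the block exits
        num += 1

threads = [threading.Thread(target=add_one) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("num:", num)    # always 50
```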

Daemon threads

  • Without join(), the main thread and the child threads run concurrently, in parallel with one another; with join(), the joined thread must finish executing before the rest continues
  • When set as a daemon thread, the main thread does not wait for the child thread to finish and simply carries on; the program waits for the main thread to finish, but it does not wait for daemon threads
    ```python
    import threading
    import time

    def run(n):
        print('task:', n)
        time.sleep(2)

    start_time = time.time()
    thread_list = []
    for i in range(50):
        t = threading.Thread(target=run, args=(i,))
        t.setDaemon(True)  # mark as a daemon thread; this must be done before start()
        # daemon => a servant guarding its master (the main process/thread); when the master exits, the daemon is terminated immediately
        t.start()
        thread_list.append(t)
    print('cost:', time.time() - start_time)

    ```

    1. Threads share memory space; each process's memory is independent;

The main thread is not a daemon thread (and cannot itself be set as one); it does not wait the 2 seconds for the child threads (which are set as daemon threads) and goes straight to the final print().

    2. Threads within the same process can exchange data directly; for two processes to communicate, they must go through an intermediate agent;

5. Semaphore

Thread locks

  • When a thread is about to modify shared data, it can put a lock on the data to prevent other threads from modifying it before it has finished; any other thread that wants to modify the data must then wait until the lock is released before it can access the data.
  • A thread lock makes the locked section execute serially across threads

    num = 0  # shared data protected by the lock

    def run(n):
        global num
        lock.acquire()   # acquire the lock
        num += 1
        lock.release()   # release the lock

    lock = threading.Lock()  # instantiate the lock
    for i in range(50):
        t = threading.Thread(target=run, args=(i,))
        t.start()

    print('num:', num)

    3、新的线程轻便成立,创建新线程必要对其父进度张开叁遍克隆;(parent process)

    A mutex allows only one thread at a time to modify the data, whereas a Semaphore allows a fixed number of threads to modify it at the same time. For example, if a toilet has 3 stalls, at most 3 people can use it at once; the rest have to wait until someone comes out before they can go in.

RLock (recursive lock)

  • Used when locks are nested; put simply, a big lock that contains smaller locks inside it
    ```python
    import threading, time

    def run1():
        print("grab the first part data")
        lock.acquire()
        global num
        num += 1
        lock.release()
        return num

    def run2():
        print("grab the second part data")
        lock.acquire()
        global num2
        num2 += 1
        lock.release()
        return num2

    def run3():
        lock.acquire()
        res = run1()
        print('--------between run1 and run2-----')
        res2 = run2()
        lock.release()
        print(res, res2)

    if __name__ == '__main__':
        num, num2 = 0, 0
        lock = threading.RLock()
        for i in range(10):
            t = threading.Thread(target=run3)
            t.start()

    while threading.active_count() != 1:
        print(threading.active_count())
    else:
        print('----all threads done---')
        print(num, num2)
    ```

    4. A thread can control and operate on other threads in the same process, but a process can only operate on its child processes;

 

Semaphore

  • A mutex (thread lock) allows only one thread at a time to modify the data, whereas a Semaphore allows a fixed number of threads to modify it at the same time. For example, if a toilet has 3 stalls, at most 3 people can use it at once; the rest have to wait until someone comes out before they can go in.
  • Each time a slot is released, another thread immediately gets in (for example, limiting the number of concurrent connections in a socket server)

    import threading, time

    def run(n):
        semaphore.acquire()
        time.sleep(1)
        print("run the thread: %s\n" % n)
        semaphore.release()

    if __name__ == '__main__':
        num = 0
        semaphore = threading.BoundedSemaphore(5)  # allow at most 5 threads to run at the same time
        for i in range(20):
            t = threading.Thread(target=run, args=(i,))
            t.start()

    while threading.active_count() != 1:
        pass  # print(threading.active_count())
    else:
        print('----all threads done---')
        print(num)

    5. Threads can exchange data with one another; processes cannot exchange data directly.

6. The effect of join is to wait for a thread to finish executing
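join() also accepts a timeout; a short sketch (the 3-second sleep and 1-second timeout are invented for illustration) of waiting a bounded amount of time for a slow thread.

```python
# Minimal sketch: join(timeout) waits at most the given number of seconds.
import threading, time

def slow():
    time.sleep(3)

t = threading.Thread(target=slow)
t.start()
t.join(timeout=1)                      # returns after about 1 second even though the thread is still running
print("still alive?", t.is_alive())    # True: the timeout elapsed before the thread finished
```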

Multithreading by subclassing

  • Generally not used in practice

    Thread source code:

 

Multithreading via a class

import threading,time

class MyThread(threading.Thread):
    def __init__(self, n):
        super(MyThread, self).__init__()
        self.n = n

    def run(self):  # the method name here must be run
        print("running task", self.n)
        time.sleep(2)

t1 = MyThread(1)
t2 = MyThread(2)
t1.start()
t2.start()

 

7. Exercises

"""Thread module emulating a subset of Java's threading model."""

import sys as _sys
import _thread

from time import monotonic as _time
from traceback import format_exc as _format_exc
from _weakrefset import WeakSet
from itertools import islice as _islice, count as _count
try:
    from _collections import deque as _deque
except ImportError:
    from collections import deque as _deque

# Note regarding PEP 8 compliant names
#  This threading model was originally inspired by Java, and inherited
# the convention of camelCase function and method names from that
# language. Those original names are not in any imminent danger of
# being deprecated (even for Py3k),so this module provides them as an
# alias for the PEP 8 compliant names
# Note that using the new PEP 8 compliant names facilitates substitution
# with the multiprocessing module, which doesn't provide the old
# Java inspired names.

__all__ = ['active_count', 'Condition', 'current_thread', 'enumerate', 'Event',
           'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread', 'Barrier',
           'Timer', 'ThreadError', 'setprofile', 'settrace', 'local', 'stack_size']

# Rename some stuff so "from threading import *" is safe
_start_new_thread = _thread.start_new_thread
_allocate_lock = _thread.allocate_lock
_set_sentinel = _thread._set_sentinel
get_ident = _thread.get_ident
ThreadError = _thread.error
try:
    _CRLock = _thread.RLock
except AttributeError:
    _CRLock = None
TIMEOUT_MAX = _thread.TIMEOUT_MAX
del _thread


# Support for profile and trace hooks

_profile_hook = None
_trace_hook = None

def setprofile(func):
    """Set a profile function for all threads started from the threading module.

    The func will be passed to sys.setprofile() for each thread, before its
    run() method is called.

    """
    global _profile_hook
    _profile_hook = func

def settrace(func):
    """Set a trace function for all threads started from the threading module.

    The func will be passed to sys.settrace() for each thread, before its run()
    method is called.

    """
    global _trace_hook
    _trace_hook = func

# Synchronization classes

Lock = _allocate_lock

def RLock(*args, **kwargs):
    """Factory function that returns a new reentrant lock.

    A reentrant lock must be released by the thread that acquired it. Once a
    thread has acquired a reentrant lock, the same thread may acquire it again
    without blocking; the thread must release it once for each time it has
    acquired it.

    """
    if _CRLock is None:
        return _PyRLock(*args, **kwargs)
    return _CRLock(*args, **kwargs)

class _RLock:
    """This class implements reentrant lock objects.

    A reentrant lock must be released by the thread that acquired it. Once a
    thread has acquired a reentrant lock, the same thread may acquire it
    again without blocking; the thread must release it once for each time it
    has acquired it.

    """

    def __init__(self):
        self._block = _allocate_lock()
        self._owner = None
        self._count = 0

    def __repr__(self):
        owner = self._owner
        try:
            owner = _active[owner].name
        except KeyError:
            pass
        return "<%s %s.%s object owner=%r count=%d at %s>" % (
            "locked" if self._block.locked() else "unlocked",
            self.__class__.__module__,
            self.__class__.__qualname__,
            owner,
            self._count,
            hex(id(self))
        )

    def acquire(self, blocking=True, timeout=-1):
        """Acquire a lock, blocking or non-blocking.

        When invoked without arguments: if this thread already owns the lock,
        increment the recursion level by one, and return immediately. Otherwise,
        if another thread owns the lock, block until the lock is unlocked. Once
        the lock is unlocked (not owned by any thread), then grab ownership, set
        the recursion level to one, and return. If more than one thread is
        blocked waiting until the lock is unlocked, only one at a time will be
        able to grab ownership of the lock. There is no return value in this
        case.

        When invoked with the blocking argument set to true, do the same thing
        as when called without arguments, and return true.

        When invoked with the blocking argument set to false, do not block. If a
        call without an argument would block, return false immediately;
        otherwise, do the same thing as when called without arguments, and
        return true.

        When invoked with the floating-point timeout argument set to a positive
        value, block for at most the number of seconds specified by timeout
        and as long as the lock cannot be acquired.  Return true if the lock has
        been acquired, false if the timeout has elapsed.

        """
        me = get_ident()
        if self._owner == me:
            self._count += 1
            return 1
        rc = self._block.acquire(blocking, timeout)
        if rc:
            self._owner = me
            self._count = 1
        return rc

    __enter__ = acquire

    def release(self):
        """Release a lock, decrementing the recursion level.

        If after the decrement it is zero, reset the lock to unlocked (not owned
        by any thread), and if any other threads are blocked waiting for the
        lock to become unlocked, allow exactly one of them to proceed. If after
        the decrement the recursion level is still nonzero, the lock remains
        locked and owned by the calling thread.

        Only call this method when the calling thread owns the lock. A
        RuntimeError is raised if this method is called when the lock is
        unlocked.

        There is no return value.

        """
        if self._owner != get_ident():
            raise RuntimeError("cannot release un-acquired lock")
        self._count = count = self._count - 1
        if not count:
            self._owner = None
            self._block.release()

    def __exit__(self, t, v, tb):
        self.release()

    # Internal methods used by condition variables

    def _acquire_restore(self, state):
        self._block.acquire()
        self._count, self._owner = state

    def _release_save(self):
        if self._count == 0:
            raise RuntimeError("cannot release un-acquired lock")
        count = self._count
        self._count = 0
        owner = self._owner
        self._owner = None
        self._block.release()
        return (count, owner)

    def _is_owned(self):
        return self._owner == get_ident()

_PyRLock = _RLock


class Condition:
    """Class that implements a condition variable.

    A condition variable allows one or more threads to wait until they are
    notified by another thread.

    If the lock argument is given and not None, it must be a Lock or RLock
    object, and it is used as the underlying lock. Otherwise, a new RLock object
    is created and used as the underlying lock.

    """

    def __init__(self, lock=None):
        if lock is None:
            lock = RLock()
        self._lock = lock
        # Export the lock's acquire() and release() methods
        self.acquire = lock.acquire
        self.release = lock.release
        # If the lock defines _release_save() and/or _acquire_restore(),
        # these override the default implementations (which just call
        # release() and acquire() on the lock).  Ditto for _is_owned().
        try:
            self._release_save = lock._release_save
        except AttributeError:
            pass
        try:
            self._acquire_restore = lock._acquire_restore
        except AttributeError:
            pass
        try:
            self._is_owned = lock._is_owned
        except AttributeError:
            pass
        self._waiters = _deque()

    def __enter__(self):
        return self._lock.__enter__()

    def __exit__(self, *args):
        return self._lock.__exit__(*args)

    def __repr__(self):
        return "<Condition(%s, %d)>" % (self._lock, len(self._waiters))

    def _release_save(self):
        self._lock.release()           # No state to save

    def _acquire_restore(self, x):
        self._lock.acquire()           # Ignore saved state

    def _is_owned(self):
        # Return True if lock is owned by current_thread.
        # This method is called only if _lock doesn't have _is_owned().
        if self._lock.acquire(0):
            self._lock.release()
            return False
        else:
            return True

    def wait(self, timeout=None):
        """Wait until notified or until a timeout occurs.

        If the calling thread has not acquired the lock when this method is
        called, a RuntimeError is raised.

        This method releases the underlying lock, and then blocks until it is
        awakened by a notify() or notify_all() call for the same condition
        variable in another thread, or until the optional timeout occurs. Once
        awakened or timed out, it re-acquires the lock and returns.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof).

        When the underlying lock is an RLock, it is not released using its
        release() method, since this may not actually unlock the lock when it
        was acquired multiple times recursively. Instead, an internal interface
        of the RLock class is used, which really unlocks it even when it has
        been recursively acquired several times. Another internal interface is
        then used to restore the recursion level when the lock is reacquired.

        """
        if not self._is_owned():
            raise RuntimeError("cannot wait on un-acquired lock")
        waiter = _allocate_lock()
        waiter.acquire()
        self._waiters.append(waiter)
        saved_state = self._release_save()
        gotit = False
        try:    # restore state no matter what (e.g., KeyboardInterrupt)
            if timeout is None:
                waiter.acquire()
                gotit = True
            else:
                if timeout > 0:
                    gotit = waiter.acquire(True, timeout)
                else:
                    gotit = waiter.acquire(False)
            return gotit
        finally:
            self._acquire_restore(saved_state)
            if not gotit:
                try:
                    self._waiters.remove(waiter)
                except ValueError:
                    pass

    def wait_for(self, predicate, timeout=None):
        """Wait until a condition evaluates to True.

        predicate should be a callable which result will be interpreted as a
        boolean value.  A timeout may be provided giving the maximum time to
        wait.

        """
        endtime = None
        waittime = timeout
        result = predicate()
        while not result:
            if waittime is not None:
                if endtime is None:
                    endtime = _time() + waittime
                else:
                    waittime = endtime - _time()
                    if waittime <= 0:
                        break
            self.wait(waittime)
            result = predicate()
        return result

    def notify(self, n=1):
        """Wake up one or more threads waiting on this condition, if any.

        If the calling thread has not acquired the lock when this method is
        called, a RuntimeError is raised.

        This method wakes up at most n of the threads waiting for the condition
        variable; it is a no-op if no threads are waiting.

        """
        if not self._is_owned():
            raise RuntimeError("cannot notify on un-acquired lock")
        all_waiters = self._waiters
        waiters_to_notify = _deque(_islice(all_waiters, n))
        if not waiters_to_notify:
            return
        for waiter in waiters_to_notify:
            waiter.release()
            try:
                all_waiters.remove(waiter)
            except ValueError:
                pass

    def notify_all(self):
        """Wake up all threads waiting on this condition.

        If the calling thread has not acquired the lock when this method
        is called, a RuntimeError is raised.

        """
        self.notify(len(self._waiters))

    notifyAll = notify_all


class Semaphore:
    """This class implements semaphore objects.

    Semaphores manage a counter representing the number of release() calls minus
    the number of acquire() calls, plus an initial value. The acquire() method
    blocks if necessary until it can return without making the counter
    negative. If not given, value defaults to 1.

    """

    # After Tim Peters' semaphore class, but not quite the same (no maximum)

    def __init__(self, value=1):
        if value < 0:
            raise ValueError("semaphore initial value must be >= 0")
        self._cond = Condition(Lock())
        self._value = value

    def acquire(self, blocking=True, timeout=None):
        """Acquire a semaphore, decrementing the internal counter by one.

        When invoked without arguments: if the internal counter is larger than
        zero on entry, decrement it by one and return immediately. If it is zero
        on entry, block, waiting until some other thread has called release() to
        make it larger than zero. This is done with proper interlocking so that
        if multiple acquire() calls are blocked, release() will wake exactly one
        of them up. The implementation may pick one at random, so the order in
        which blocked threads are awakened should not be relied on. There is no
        return value in this case.

        When invoked with blocking set to true, do the same thing as when called
        without arguments, and return true.

        When invoked with blocking set to false, do not block. If a call without
        an argument would block, return false immediately; otherwise, do the
        same thing as when called without arguments, and return true.

        When invoked with a timeout other than None, it will block for at
        most timeout seconds.  If acquire does not complete successfully in
        that interval, return false.  Return true otherwise.

        """
        if not blocking and timeout is not None:
            raise ValueError("can't specify timeout for non-blocking acquire")
        rc = False
        endtime = None
        with self._cond:
            while self._value == 0:
                if not blocking:
                    break
                if timeout is not None:
                    if endtime is None:
                        endtime = _time() + timeout
                    else:
                        timeout = endtime - _time()
                        if timeout <= 0:
                            break
                self._cond.wait(timeout)
            else:
                self._value -= 1
                rc = True
        return rc

    __enter__ = acquire

    def release(self):
        """Release a semaphore, incrementing the internal counter by one.

        When the counter is zero on entry and another thread is waiting for it
        to become larger than zero again, wake up that thread.

        """
        with self._cond:
            self._value += 1
            self._cond.notify()

    def __exit__(self, t, v, tb):
        self.release()


class BoundedSemaphore(Semaphore):
    """Implements a bounded semaphore.

    A bounded semaphore checks to make sure its current value doesn't exceed its
    initial value. If it does, ValueError is raised. In most situations
    semaphores are used to guard resources with limited capacity.

    If the semaphore is released too many times it's a sign of a bug. If not
    given, value defaults to 1.

    Like regular semaphores, bounded semaphores manage a counter representing
    the number of release() calls minus the number of acquire() calls, plus an
    initial value. The acquire() method blocks if necessary until it can return
    without making the counter negative. If not given, value defaults to 1.

    """

    def __init__(self, value=1):
        Semaphore.__init__(self, value)
        self._initial_value = value

    def release(self):
        """Release a semaphore, incrementing the internal counter by one.

        When the counter is zero on entry and another thread is waiting for it
        to become larger than zero again, wake up that thread.

        If the number of releases exceeds the number of acquires,
        raise a ValueError.

        """
        with self._cond:
            if self._value >= self._initial_value:
                raise ValueError("Semaphore released too many times")
            self._value += 1
            self._cond.notify()


class Event:
    """Class implementing event objects.

    Events manage a flag that can be set to true with the set() method and reset
    to false with the clear() method. The wait() method blocks until the flag is
    true.  The flag is initially false.

    """

    # After Tim Peters' event class (without is_posted())

    def __init__(self):
        self._cond = Condition(Lock())
        self._flag = False

    def _reset_internal_locks(self):
        # private!  called by Thread._reset_internal_locks by _after_fork()
        self._cond.__init__(Lock())

    def is_set(self):
        """Return true if and only if the internal flag is true."""
        return self._flag

    isSet = is_set

    def set(self):
        """Set the internal flag to true.

        All threads waiting for it to become true are awakened. Threads
        that call wait() once the flag is true will not block at all.

        """
        with self._cond:
            self._flag = True
            self._cond.notify_all()

    def clear(self):
        """Reset the internal flag to false.

        Subsequently, threads calling wait() will block until set() is called to
        set the internal flag to true again.

        """
        with self._cond:
            self._flag = False

    def wait(self, timeout=None):
        """Block until the internal flag is true.

        If the internal flag is true on entry, return immediately. Otherwise,
        block until another thread calls set() to set the flag to true, or until
        the optional timeout occurs.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof).

        This method returns the internal flag on exit, so it will always return
        True except if a timeout is given and the operation times out.

        """
        with self._cond:
            signaled = self._flag
            if not signaled:
                signaled = self._cond.wait(timeout)
            return signaled


# A barrier class.  Inspired in part by the pthread_barrier_* api and
# the CyclicBarrier class from Java.  See
# http://sourceware.org/pthreads-win32/manual/pthread_barrier_init.html and
# http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/
#        CyclicBarrier.html
# for information.
# We maintain two main states, 'filling' and 'draining' enabling the barrier
# to be cyclic.  Threads are not allowed into it until it has fully drained
# since the previous cycle.  In addition, a 'resetting' state exists which is
# similar to 'draining' except that threads leave with a BrokenBarrierError,
# and a 'broken' state in which all threads get the exception.
class Barrier:
    """Implements a Barrier.

    Useful for synchronizing a fixed number of threads at known synchronization
    points.  Threads block on 'wait()' and are simultaneously awoken once they have all
    made that call.

    """

    def __init__(self, parties, action=None, timeout=None):
        """Create a barrier, initialised to 'parties' threads.

        'action' is a callable which, when supplied, will be called by one of
        the threads after they have all entered the barrier and just prior to
        releasing them all. If a 'timeout' is provided, it is uses as the
        default for all subsequent 'wait()' calls.

        """
        self._cond = Condition(Lock())
        self._action = action
        self._timeout = timeout
        self._parties = parties
        self._state = 0 #0 filling, 1, draining, -1 resetting, -2 broken
        self._count = 0

    def wait(self, timeout=None):
        """Wait for the barrier.

        When the specified number of threads have started waiting, they are all
        simultaneously awoken. If an 'action' was provided for the barrier, one
        of the threads will have executed that callback prior to returning.
        Returns an individual index number from 0 to 'parties-1'.

        """
        if timeout is None:
            timeout = self._timeout
        with self._cond:
            self._enter() # Block while the barrier drains.
            index = self._count
            self._count += 1
            try:
                if index + 1 == self._parties:
                    # We release the barrier
                    self._release()
                else:
                    # We wait until someone releases us
                    self._wait(timeout)
                return index
            finally:
                self._count -= 1
                # Wake up any threads waiting for barrier to drain.
                self._exit()

    # Block until the barrier is ready for us, or raise an exception
    # if it is broken.
    def _enter(self):
        while self._state in (-1, 1):
            # It is draining or resetting, wait until done
            self._cond.wait()
        #see if the barrier is in a broken state
        if self._state < 0:
            raise BrokenBarrierError
        assert self._state == 0

    # Optionally run the 'action' and release the threads waiting
    # in the barrier.
    def _release(self):
        try:
            if self._action:
                self._action()
            # enter draining state
            self._state = 1
            self._cond.notify_all()
        except:
            #an exception during the _action handler.  Break and reraise
            self._break()
            raise

    # Wait in the barrier until we are relased.  Raise an exception
    # if the barrier is reset or broken.
    def _wait(self, timeout):
        if not self._cond.wait_for(lambda : self._state != 0, timeout):
            #timed out.  Break the barrier
            self._break()
            raise BrokenBarrierError
        if self._state < 0:
            raise BrokenBarrierError
        assert self._state == 1

    # If we are the last thread to exit the barrier, signal any threads
    # waiting for the barrier to drain.
    def _exit(self):
        if self._count == 0:
            if self._state in (-1, 1):
                #resetting or draining
                self._state = 0
                self._cond.notify_all()

    def reset(self):
        """Reset the barrier to the initial state.

        Any threads currently waiting will get the BrokenBarrier exception
        raised.

        """
        with self._cond:
            if self._count > 0:
                if self._state == 0:
                    #reset the barrier, waking up threads
                    self._state = -1
                elif self._state == -2:
                    #was broken, set it to reset state
                    #which clears when the last thread exits
                    self._state = -1
            else:
                self._state = 0
            self._cond.notify_all()

    def abort(self):
        """Place the barrier into a 'broken' state.

        Useful in case of error.  Any currently waiting threads and threads
        attempting to 'wait()' will have BrokenBarrierError raised.

        """
        with self._cond:
            self._break()

    def _break(self):
        # An internal error was detected.  The barrier is set to
        # a broken state all parties awakened.
        self._state = -2
        self._cond.notify_all()

    @property
    def parties(self):
        """Return the number of threads required to trip the barrier."""
        return self._parties

    @property
    def n_waiting(self):
        """Return the number of threads currently waiting at the barrier."""
        # We don't need synchronization here since this is an ephemeral result
        # anyway.  It returns the correct value in the steady state.
        if self._state == 0:
            return self._count
        return 0

    @property
    def broken(self):
        """Return True if the barrier is in a broken state."""
        return self._state == -2

# exception raised by the Barrier class
class BrokenBarrierError(RuntimeError):
    pass


# Helper to generate new thread names
_counter = _count().__next__
_counter() # Consume 0 so first non-main thread has id 1.
def _newname(template="Thread-%d"):
    return template % _counter()

# Active thread administration
_active_limbo_lock = _allocate_lock()
_active = {}    # maps thread id to Thread object
_limbo = {}
_dangling = WeakSet()

# Main class for threads

class Thread:
    """A class that represents a thread of control.

    This class can be safely subclassed in a limited fashion. There are two ways
    to specify the activity: by passing a callable object to the constructor, or
    by overriding the run() method in a subclass.

    """

    _initialized = False
    # Need to store a reference to sys.exc_info for printing
    # out exceptions when a thread tries to use a global var. during interp.
    # shutdown and thus raises an exception about trying to perform some
    # operation on/with a NoneType
    _exc_info = _sys.exc_info
    # Keep sys.exc_clear too to clear the exception just before
    # allowing .join() to return.
    #XXX __exc_clear = _sys.exc_clear

    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs=None, *, daemon=None):
        """This constructor should always be called with keyword arguments. Arguments are:

        *group* should be None; reserved for future extension when a ThreadGroup
        class is implemented.

        *target* is the callable object to be invoked by the run()
        method. Defaults to None, meaning nothing is called.

        *name* is the thread name. By default, a unique name is constructed of
        the form "Thread-N" where N is a small decimal number.

        *args* is the argument tuple for the target invocation. Defaults to ().

        *kwargs* is a dictionary of keyword arguments for the target
        invocation. Defaults to {}.

        If a subclass overrides the constructor, it must make sure to invoke
        the base class constructor (Thread.__init__()) before doing anything
        else to the thread.

        """
        assert group is None, "group argument must be None for now"
        if kwargs is None:
            kwargs = {}
        self._target = target
        self._name = str(name or _newname())
        self._args = args
        self._kwargs = kwargs
        if daemon is not None:
            self._daemonic = daemon
        else:
            self._daemonic = current_thread().daemon
        self._ident = None
        self._tstate_lock = None
        self._started = Event()
        self._is_stopped = False
        self._initialized = True
        # sys.stderr is not stored in the class like
        # sys.exc_info since it can be changed between instances
        self._stderr = _sys.stderr
        # For debugging and _after_fork()
        _dangling.add(self)

    def _reset_internal_locks(self, is_alive):
        # private!  Called by _after_fork() to reset our internal locks as
        # they may be in an invalid state leading to a deadlock or crash.
        self._started._reset_internal_locks()
        if is_alive:
            self._set_tstate_lock()
        else:
            # The thread isn't alive after fork: it doesn't have a tstate
            # anymore.
            self._is_stopped = True
            self._tstate_lock = None

    def __repr__(self):
        assert self._initialized, "Thread.__init__() was not called"
        status = "initial"
        if self._started.is_set():
            status = "started"
        self.is_alive() # easy way to get ._is_stopped set when appropriate
        if self._is_stopped:
            status = "stopped"
        if self._daemonic:
            status += " daemon"
        if self._ident is not None:
            status += " %s" % self._ident
        return "<%s(%s, %s)>" % (self.__class__.__name__, self._name, status)

    def start(self):
        """Start the thread's activity.

        It must be called at most once per thread object. It arranges for the
        object's run() method to be invoked in a separate thread of control.

        This method will raise a RuntimeError if called more than once on the
        same thread object.

        """
        if not self._initialized:
            raise RuntimeError("thread.__init__() not called")

        if self._started.is_set():
            raise RuntimeError("threads can only be started once")
        with _active_limbo_lock:
            _limbo[self] = self
        try:
            _start_new_thread(self._bootstrap, ())
        except Exception:
            with _active_limbo_lock:
                del _limbo[self]
            raise
        self._started.wait()

    def run(self):
        """Method representing the thread's activity.

        You may override this method in a subclass. The standard run() method
        invokes the callable object passed to the object's constructor as the
        target argument, if any, with sequential and keyword arguments taken
        from the args and kwargs arguments, respectively.

        """
        try:
            if self._target:
                self._target(*self._args, **self._kwargs)
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self._target, self._args, self._kwargs

    def _bootstrap(self):
        # Wrapper around the real bootstrap code that ignores
        # exceptions during interpreter cleanup.  Those typically
        # happen when a daemon thread wakes up at an unfortunate
        # moment, finds the world around it destroyed, and raises some
        # random exception *** while trying to report the exception in
        # _bootstrap_inner() below ***.  Those random exceptions
        # don't help anybody, and they confuse users, so we suppress
        # them.  We suppress them only when it appears that the world
        # indeed has already been destroyed, so that exceptions in
        # _bootstrap_inner() during normal business hours are properly
        # reported.  Also, we only suppress them for daemonic threads;
        # if a non-daemonic encounters this, something else is wrong.
        try:
            self._bootstrap_inner()
        except:
            if self._daemonic and _sys is None:
                return
            raise

    def _set_ident(self):
        self._ident = get_ident()

    def _set_tstate_lock(self):
        """
        Set a lock object which will be released by the interpreter when
        the underlying thread state (see pystate.h) gets deleted.
        """
        self._tstate_lock = _set_sentinel()
        self._tstate_lock.acquire()

    def _bootstrap_inner(self):
        try:
            self._set_ident()
            self._set_tstate_lock()
            self._started.set()
            with _active_limbo_lock:
                _active[self._ident] = self
                del _limbo[self]

            if _trace_hook:
                _sys.settrace(_trace_hook)
            if _profile_hook:
                _sys.setprofile(_profile_hook)

            try:
                self.run()
            except SystemExit:
                pass
            except:
                # If sys.stderr is no more (most likely from interpreter
                # shutdown) use self._stderr.  Otherwise still use sys (as in
                # _sys) in case sys.stderr was redefined since the creation of
                # self.
                if _sys and _sys.stderr is not None:
                    print("Exception in thread %s:\n%s" %
                          (self.name, _format_exc()), file=_sys.stderr)
                elif self._stderr is not None:
                    # Do the best job possible w/o a huge amt. of code to
                    # approximate a traceback (code ideas from
                    # Lib/traceback.py)
                    exc_type, exc_value, exc_tb = self._exc_info()
                    try:
                        print((
                            "Exception in thread " + self.name +
                            " (most likely raised during interpreter shutdown):"), file=self._stderr)
                        print((
                            "Traceback (most recent call last):"), file=self._stderr)
                        while exc_tb:
                            print((
                                '  File "%s", line %s, in %s' %
                                (exc_tb.tb_frame.f_code.co_filename,
                                    exc_tb.tb_lineno,
                                    exc_tb.tb_frame.f_code.co_name)), file=self._stderr)
                            exc_tb = exc_tb.tb_next
                        print(("%s: %s" % (exc_type, exc_value)), file=self._stderr)
                    # Make sure that exc_tb gets deleted since it is a memory
                    # hog; deleting everything else is just for thoroughness
                    finally:
                        del exc_type, exc_value, exc_tb
            finally:
                # Prevent a race in
                # test_threading.test_no_refcycle_through_target when
                # the exception keeps the target alive past when we
                # assert that it's dead.
                #XXX self._exc_clear()
                pass
        finally:
            with _active_limbo_lock:
                try:
                    # We don't call self._delete() because it also
                    # grabs _active_limbo_lock.
                    del _active[get_ident()]
                except:
                    pass

    def _stop(self):
        # After calling ._stop(), .is_alive() returns False and .join() returns
        # immediately.  ._tstate_lock must be released before calling ._stop().
        #
        # Normal case:  C code at the end of the thread's life
        # (release_sentinel in _threadmodule.c) releases ._tstate_lock, and
        # that's detected by our ._wait_for_tstate_lock(), called by .join()
        # and .is_alive().  Any number of threads _may_ call ._stop()
        # simultaneously (for example, if multiple threads are blocked in
        # .join() calls), and they're not serialized.  That's harmless -
        # they'll just make redundant rebindings of ._is_stopped and
        # ._tstate_lock.  Obscure:  we rebind ._tstate_lock last so that the
        # "assert self._is_stopped" in ._wait_for_tstate_lock() always works
        # (the assert is executed only if ._tstate_lock is None).
        #
        # Special case:  _main_thread releases ._tstate_lock via this
        # module's _shutdown() function.
        lock = self._tstate_lock
        if lock is not None:
            assert not lock.locked()
        self._is_stopped = True
        self._tstate_lock = None

    def _delete(self):
        "Remove current thread from the dict of currently running threads."

        # Notes about running with _dummy_thread:
        #
        # Must take care to not raise an exception if _dummy_thread is being
        # used (and thus this module is being used as an instance of
        # dummy_threading).  _dummy_thread.get_ident() always returns -1 since
        # there is only one thread if _dummy_thread is being used.  Thus
        # len(_active) is always <= 1 here, and any Thread instance created
        # overwrites the (if any) thread currently registered in _active.
        #
        # An instance of _MainThread is always created by 'threading'.  This
        # gets overwritten the instant an instance of Thread is created; both
        # threads return -1 from _dummy_thread.get_ident() and thus have the
        # same key in the dict.  So when the _MainThread instance created by
        # 'threading' tries to clean itself up when atexit calls this method
        # it gets a KeyError if another Thread instance was created.
        #
        # This all means that KeyError from trying to delete something from
        # _active if dummy_threading is being used is a red herring.  But
        # since it isn't if dummy_threading is *not* being used then don't
        # hide the exception.

        try:
            with _active_limbo_lock:
                del _active[get_ident()]
                # There must not be any python code between the previous line
                # and after the lock is released.  Otherwise a tracing function
                # could try to acquire the lock again in the same thread, (in
                # current_thread()), and would block.
        except KeyError:
            if 'dummy_threading' not in _sys.modules:
                raise

    def join(self, timeout=None):
        """Wait until the thread terminates.

        This blocks the calling thread until the thread whose join() method is
        called terminates -- either normally or through an unhandled exception
        or until the optional timeout occurs.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof). As join() always returns None, you must call
        isAlive() after join() to decide whether a timeout happened -- if the
        thread is still alive, the join() call timed out.

        When the timeout argument is not present or None, the operation will
        block until the thread terminates.

        A thread can be join()ed many times.

        join() raises a RuntimeError if an attempt is made to join the current
        thread as that would cause a deadlock. It is also an error to join() a
        thread before it has been started and attempts to do so raises the same
        exception.

        """
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if not self._started.is_set():
            raise RuntimeError("cannot join thread before it is started")
        if self is current_thread():
            raise RuntimeError("cannot join current thread")

        if timeout is None:
            self._wait_for_tstate_lock()
        else:
            # the behavior of a negative timeout isn't documented, but
            # historically .join(timeout=x) for x<0 has acted as if timeout=0
            self._wait_for_tstate_lock(timeout=max(timeout, 0))

    def _wait_for_tstate_lock(self, block=True, timeout=-1):
        # Issue #18808: wait for the thread state to be gone.
        # At the end of the thread's life, after all knowledge of the thread
        # is removed from C data structures, C code releases our _tstate_lock.
        # This method passes its arguments to _tstate_lock.acquire().
        # If the lock is acquired, the C code is done, and self._stop() is
        # called.  That sets ._is_stopped to True, and ._tstate_lock to None.
        lock = self._tstate_lock
        if lock is None:  # already determined that the C code is done
            assert self._is_stopped
        elif lock.acquire(block, timeout):
            lock.release()
            self._stop()

    @property
    def name(self):
        """A string used for identification purposes only.

        It has no semantics. Multiple threads may be given the same name. The
        initial name is set by the constructor.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._name

    @name.setter
    def name(self, name):
        assert self._initialized, "Thread.__init__() not called"
        self._name = str(name)

    @property
    def ident(self):
        """Thread identifier of this thread or None if it has not been started.

        This is a nonzero integer. See the thread.get_ident() function. Thread
        identifiers may be recycled when a thread exits and another thread is
        created. The identifier is available even after the thread has exited.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._ident

    def is_alive(self):
        """Return whether the thread is alive.

        This method returns True just before the run() method starts until just
        after the run() method terminates. The module function enumerate()
        returns a list of all alive threads.

        """
        assert self._initialized, "Thread.__init__() not called"
        if self._is_stopped or not self._started.is_set():
            return False
        self._wait_for_tstate_lock(False)
        return not self._is_stopped

    isAlive = is_alive

    @property
    def daemon(self):
        """A boolean value indicating whether this thread is a daemon thread.

        This must be set before start() is called, otherwise RuntimeError is
        raised. Its initial value is inherited from the creating thread; the
        main thread is not a daemon thread and therefore all threads created in
        the main thread default to daemon = False.

        The entire Python program exits when no alive non-daemon threads are
        left.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._daemonic

    @daemon.setter
    def daemon(self, daemonic):
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if self._started.is_set():
            raise RuntimeError("cannot set daemon status of active thread")
        self._daemonic = daemonic

    def isDaemon(self):
        return self.daemon

    def setDaemon(self, daemonic):
        self.daemon = daemonic

    def getName(self):
        return self.name

    def setName(self, name):
        self.name = name

# The timer class was contributed by Itamar Shtull-Trauring

class Timer(Thread):
    """Call a function after a specified number of seconds:

            t = Timer(30.0, f, args=None, kwargs=None)
            t.start()
            t.cancel()     # stop the timer's action if it's still waiting

    """

    def __init__(self, interval, function, args=None, kwargs=None):
        Thread.__init__(self)
        self.interval = interval
        self.function = function
        self.args = args if args is not None else []
        self.kwargs = kwargs if kwargs is not None else {}
        self.finished = Event()

    def cancel(self):
        """Stop the timer if it hasn't finished yet."""
        self.finished.set()

    def run(self):
        self.finished.wait(self.interval)
        if not self.finished.is_set():
            self.function(*self.args, **self.kwargs)
        self.finished.set()

# Special thread class to represent the main thread
# This is garbage collected through an exit handler

class _MainThread(Thread):

    def __init__(self):
        Thread.__init__(self, name="MainThread", daemon=False)
        self._set_tstate_lock()
        self._started.set()
        self._set_ident()
        with _active_limbo_lock:
            _active[self._ident] = self


# Dummy thread class to represent threads not started here.
# These aren't garbage collected when they die, nor can they be waited for.
# If they invoke anything in threading.py that calls current_thread(), they
# leave an entry in the _active dict forever after.
# Their purpose is to return *something* from current_thread().
# They are marked as daemon threads so we won't wait for them
# when we exit (conform previous semantics).

class _DummyThread(Thread):

    def __init__(self):
        Thread.__init__(self, name=_newname("Dummy-%d"), daemon=True)

        self._started.set()
        self._set_ident()
        with _active_limbo_lock:
            _active[self._ident] = self

    def _stop(self):
        pass

    def join(self, timeout=None):
        assert False, "cannot join a dummy thread"


# Global API functions

def current_thread():
    """Return the current Thread object, corresponding to the caller's thread of control.

    If the caller's thread of control was not created through the threading
    module, a dummy thread object with limited functionality is returned.

    """
    try:
        return _active[get_ident()]
    except KeyError:
        return _DummyThread()

currentThread = current_thread

def active_count():
    """Return the number of Thread objects currently alive.

    The returned count is equal to the length of the list returned by
    enumerate().

    """
    with _active_limbo_lock:
        return len(_active) + len(_limbo)

activeCount = active_count

def _enumerate():
    # Same as enumerate(), but without the lock. Internal use only.
    return list(_active.values()) + list(_limbo.values())

def enumerate():
    """Return a list of all Thread objects currently alive.

    The list includes daemonic threads, dummy thread objects created by
    current_thread(), and the main thread. It excludes terminated threads and
    threads that have not yet been started.

    """
    with _active_limbo_lock:
        return list(_active.values()) + list(_limbo.values())

from _thread import stack_size

# Create the main thread object,
# and make it available for the interpreter
# (Py_Main) as threading._shutdown.

_main_thread = _MainThread()

def _shutdown():
    # Obscure:  other threads may be waiting to join _main_thread.  That's
    # dubious, but some code does it.  We can't wait for C code to release
    # the main thread's tstate_lock - that won't happen until the interpreter
    # is nearly dead.  So we release it here.  Note that just calling _stop()
    # isn't enough:  other threads may already be waiting on _tstate_lock.
    tlock = _main_thread._tstate_lock
    # The main thread isn't finished yet, so its thread state lock can't have
    # been released.
    assert tlock is not None
    assert tlock.locked()
    tlock.release()
    _main_thread._stop()
    t = _pickSomeNonDaemonThread()
    while t:
        t.join()
        t = _pickSomeNonDaemonThread()
    _main_thread._delete()

def _pickSomeNonDaemonThread():
    for t in enumerate():
        if not t.daemon and t.is_alive():
            return t
    return None

def main_thread():
    """Return the main thread object.

    In normal conditions, the main thread is the thread from which the
    Python interpreter was started.
    """
    return _main_thread

# get thread-local implementation, either from the thread
# module, or from the python fallback

try:
    from _thread import _local as local
except ImportError:
    from _threading_local import local


def _after_fork():
    # This function is called by Python/ceval.c:PyEval_ReInitThreads which
    # is called from PyOS_AfterFork.  Here we cleanup threading module state
    # that should not exist after a fork.

    # Reset _active_limbo_lock, in case we forked while the lock was held
    # by another (non-forked) thread.  http://bugs.python.org/issue874900
    global _active_limbo_lock, _main_thread
    _active_limbo_lock = _allocate_lock()

    # fork() only copied the current thread; clear references to others.
    new_active = {}
    current = current_thread()
    _main_thread = current
    with _active_limbo_lock:
        # Dangling thread instances must still have their locks reset,
        # because someone may join() them.
        threads = set(_enumerate())
        threads.update(_dangling)
        for thread in threads:
            # Any lock/condition variable may be currently locked or in an
            # invalid state, so we reinitialize them.
            if thread is current:
                # There is only one active thread. We reset the ident to
                # its new value since it can have changed.
                thread._reset_internal_locks(True)
                ident = get_ident()
                thread._ident = ident
                new_active[ident] = thread
            else:
                # All the others are already stopped.
                thread._reset_internal_locks(False)
                thread._stop()

        _limbo.clear()
        _active.clear()
        _active.update(new_active)
        assert len(_active) == 1

Semaphore

 

__author__ = "Narwhale"

import threading,time

def run(n):
    semaphore.acquire()
    time.sleep(1)
    print('Thread %s is running!' % n)
    semaphore.release()

if __name__ == '__main__':
    semaphore = threading.BoundedSemaphore(5)      # at most 5 threads run at the same time
    for i in range(20):
        t = threading.Thread(target=run,args=(i,))
        t.start()

while threading.active_count() !=1:
    pass
else:
    print('All threads have finished!')

    Thread examples:

Producer-consumer model

    The Python threading module

__author__ = "Narwhale"
import queue,time,threading
q = queue.Queue(10)

def producer(name):
    count = 0
    while True:
        print('%s produced bun %s' % (name, count))
        q.put('bun %s' % count)
        count += 1
        time.sleep(1)

def consumer(name):
    while True:
        print('%s took %s and ate it.....' % (name, q.get()))
        time.sleep(1)


A1 = threading.Thread(target=producer,args=('A1',))
A1.start()

B1 = threading.Thread(target=consumer,args=('B1',))
B1.start()
B2 = threading.Thread(target=consumer,args=('B2',))
B2.start()
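
    The example above relies on Queue's blocking get() to hand items from the producer to the consumers. If the producer also needs to know when every item has actually been processed, queue.Queue provides task_done() and join(); a minimal sketch (not part of the original example, names are illustrative):

import threading,queue

q = queue.Queue(10)

def producer():
    for i in range(5):
        q.put('包子%s' % i)
    q.join()                          # blocks until every queued item has been marked done
    print('producer: all items have been consumed')

def consumer():
    while True:
        item = q.get()
        print('consumer ate', item)
        q.task_done()                 # tell the queue this item has been fully processed

threading.Thread(target=producer).start()
threading.Thread(target=consumer, daemon=True).start()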

    Threads can be invoked in two ways, as shown below:

Traffic light (Event example)

    Direct call

__author__ = "Narwhale"

import threading,time

event = threading.Event()

def light():
    event.set()
    count = 0
    while True:
        if count >5 and count < 10:
            event.clear()
            print('\033[41;1m红灯亮了\033[0m' )
        elif count > 10:
            event.set()
            count = 0
        else:
            print('\033[42;1m绿灯亮了\033[0m')
        time.sleep(1)
        count += 1


def car(n):
    while True:
        if event.isSet():
            print('\033[34;1m%s车正在跑!\033[0m'%n)
            time.sleep(1)
        else:
            print('车停下来了')
            event.wait()

light = threading.Thread(target=light,args=( ))
light.start()
car1 = threading.Thread(target=car,args=('Tesla',))
car1.start()

 

 

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)


if __name__ == "__main__":
    start_time = time.time()
    t1 = threading.Thread(target=func,args=(6,))
    t2 = threading.Thread(target=func,args=(9,))
    t1.start()
    t2.start()
    end_time = time.time()
    run_time = end_time-start_time
    print("\033[34;1m程序运行时间:\033[0m",run_time)


    time1 = time.time()
    func(6)
    func(9)
    time2 = time.time()
    run_time2 = time2 - time1
    print("\033[32m直接执行需要时间:\033[0m",run_time2)
The execution result is as follows:
The lucky num is  6
The lucky num is  9
程序运行时间: 0.00044083595275878906
The lucky num is  6
The lucky num is  9
直接执行需要时间: 4.002933979034424

 

    From the code above you can see that we use threading.Thread with target=<function> and args=(arguments,). Starting the two threads takes almost no time, but that is only the time needed to launch them: the IO operation (the sleep) has not actually run yet and the program is not finished. The threads do not block the main flow, which simply carries on downward, whereas a serial program executes line by line, so its run time grows with every sleep.

    So the first time printed above is only the cost of starting the threads; it does not include the IO wait. While the threads are waiting on IO, the main thread keeps executing and does not wait for them, but in the end the program still waits for all (non-daemon) threads to finish before it exits.
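
    If you want the measured time to also include the IO wait, join() the two threads before taking the end timestamp; a minimal sketch based on the example above:

import threading,time

def func(num):
    print("The lucky num is ", num)
    time.sleep(2)

start_time = time.time()
t1 = threading.Thread(target=func, args=(6,))
t2 = threading.Thread(target=func, args=(9,))
t1.start()
t2.start()
t1.join()                        # wait for both threads before stopping the clock
t2.join()
print("with join():", time.time() - start_time)   # roughly 2 seconds, not 4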

    Inheritance-style call

 

 

import threading,time

class MyThreading(threading.Thread):
    '''定义一个线程类'''
    def __init__(self,num):                       #初始化子类
        super(MyThreading,self).__init__()        #由于是继承父类threading.Thread,要重写父类,没有继承参数super(子类,self).__init__(继承父类参数)
        self.num = num

    def run(self):
        print("The lucky num is",self.num)
        time.sleep(2)
        print("使用类启动线程,本局执行在什么时候!")

if __name__ == "__main__":
    start_time1 = time.time()
    t1 = MyThreading(6)
    t2 = MyThreading(9)
    t1.start()
    t2.start()
    end_time1 = time.time()
    run_time1 = end_time1 - start_time1
    print("线程运行时间:",run_time1)

    start_time2 = time.time()
    t1.run()
    t2.run()
    end_time2 = time.time()
    run_time2 = end_time2 - start_time2
    print("串行程序执行时间:",run_time2)
The execution result is as follows:
The lucky num is 6
The lucky num is 9
线程运行时间: 0.0004470348358154297
The lucky num is 6
使用类启动线程,本局执行在什么时候!
使用类启动线程,本局执行在什么时候!
使用类启动线程,本局执行在什么时候!
The lucky num is 9
使用类启动线程,本局执行在什么时候!
串行程序执行时间: 4.004571914672852

 

    The program above writes the thread as a class; the class inherits from the Thread class in the threading module.

    threading.Thread source code:

class Thread:
    """A class that represents a thread of control.

    This class can be safely subclassed in a limited fashion. There are two ways
    to specify the activity: by passing a callable object to the constructor, or
    by overriding the run() method in a subclass.

    """

    _initialized = False
    # Need to store a reference to sys.exc_info for printing
    # out exceptions when a thread tries to use a global var. during interp.
    # shutdown and thus raises an exception about trying to perform some
    # operation on/with a NoneType
    _exc_info = _sys.exc_info
    # Keep sys.exc_clear too to clear the exception just before
    # allowing .join() to return.
    #XXX __exc_clear = _sys.exc_clear

    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs=None, *, daemon=None):
        """This constructor should always be called with keyword arguments. Arguments are:

        *group* should be None; reserved for future extension when a ThreadGroup
        class is implemented.

        *target* is the callable object to be invoked by the run()
        method. Defaults to None, meaning nothing is called.

        *name* is the thread name. By default, a unique name is constructed of
        the form "Thread-N" where N is a small decimal number.

        *args* is the argument tuple for the target invocation. Defaults to ().

        *kwargs* is a dictionary of keyword arguments for the target
        invocation. Defaults to {}.

        If a subclass overrides the constructor, it must make sure to invoke
        the base class constructor (Thread.__init__()) before doing anything
        else to the thread.

        """
        assert group is None, "group argument must be None for now"
        if kwargs is None:
            kwargs = {}
        self._target = target
        self._name = str(name or _newname())
        self._args = args
        self._kwargs = kwargs
        if daemon is not None:
            self._daemonic = daemon
        else:
            self._daemonic = current_thread().daemon
        self._ident = None
        self._tstate_lock = None
        self._started = Event()
        self._is_stopped = False
        self._initialized = True
        # sys.stderr is not stored in the class like
        # sys.exc_info since it can be changed between instances
        self._stderr = _sys.stderr
        # For debugging and _after_fork()
        _dangling.add(self)

    def _reset_internal_locks(self, is_alive):
        # private!  Called by _after_fork() to reset our internal locks as
        # they may be in an invalid state leading to a deadlock or crash.
        self._started._reset_internal_locks()
        if is_alive:
            self._set_tstate_lock()
        else:
            # The thread isn't alive after fork: it doesn't have a tstate
            # anymore.
            self._is_stopped = True
            self._tstate_lock = None

    def __repr__(self):
        assert self._initialized, "Thread.__init__() was not called"
        status = "initial"
        if self._started.is_set():
            status = "started"
        self.is_alive() # easy way to get ._is_stopped set when appropriate
        if self._is_stopped:
            status = "stopped"
        if self._daemonic:
            status  = " daemon"
        if self._ident is not None:
            status  = " %s" % self._ident
        return "<%s(%s, %s)>" % (self.__class__.__name__, self._name, status)

    def start(self):
        """Start the thread's activity.

        It must be called at most once per thread object. It arranges for the
        object's run() method to be invoked in a separate thread of control.

        This method will raise a RuntimeError if called more than once on the
        same thread object.

        """
        if not self._initialized:
            raise RuntimeError("thread.__init__() not called")

        if self._started.is_set():
            raise RuntimeError("threads can only be started once")
        with _active_limbo_lock:
            _limbo[self] = self
        try:
            _start_new_thread(self._bootstrap, ())
        except Exception:
            with _active_limbo_lock:
                del _limbo[self]
            raise
        self._started.wait()

    def run(self):
        """Method representing the thread's activity.

        You may override this method in a subclass. The standard run() method
        invokes the callable object passed to the object's constructor as the
        target argument, if any, with sequential and keyword arguments taken
        from the args and kwargs arguments, respectively.

        """
        try:
            if self._target:
                self._target(*self._args, **self._kwargs)
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self._target, self._args, self._kwargs

    def _bootstrap(self):
        # Wrapper around the real bootstrap code that ignores
        # exceptions during interpreter cleanup.  Those typically
        # happen when a daemon thread wakes up at an unfortunate
        # moment, finds the world around it destroyed, and raises some
        # random exception *** while trying to report the exception in
        # _bootstrap_inner() below ***.  Those random exceptions
        # don't help anybody, and they confuse users, so we suppress
        # them.  We suppress them only when it appears that the world
        # indeed has already been destroyed, so that exceptions in
        # _bootstrap_inner() during normal business hours are properly
        # reported.  Also, we only suppress them for daemonic threads;
        # if a non-daemonic encounters this, something else is wrong.
        try:
            self._bootstrap_inner()
        except:
            if self._daemonic and _sys is None:
                return
            raise

    def _set_ident(self):
        self._ident = get_ident()

    def _set_tstate_lock(self):
        """
        Set a lock object which will be released by the interpreter when
        the underlying thread state (see pystate.h) gets deleted.
        """
        self._tstate_lock = _set_sentinel()
        self._tstate_lock.acquire()

    def _bootstrap_inner(self):
        try:
            self._set_ident()
            self._set_tstate_lock()
            self._started.set()
            with _active_limbo_lock:
                _active[self._ident] = self
                del _limbo[self]

            if _trace_hook:
                _sys.settrace(_trace_hook)
            if _profile_hook:
                _sys.setprofile(_profile_hook)

            try:
                self.run()
            except SystemExit:
                pass
            except:
                # If sys.stderr is no more (most likely from interpreter
                # shutdown) use self._stderr.  Otherwise still use sys (as in
                # _sys) in case sys.stderr was redefined since the creation of
                # self.
                if _sys and _sys.stderr is not None:
                    print("Exception in thread %s:n%s" %
                          (self.name, _format_exc()), file=_sys.stderr)
                elif self._stderr is not None:
                    # Do the best job possible w/o a huge amt. of code to
                    # approximate a traceback (code ideas from
                    # Lib/traceback.py)
                    exc_type, exc_value, exc_tb = self._exc_info()
                    try:
                        print((
                            "Exception in thread "   self.name  
                            " (most likely raised during interpreter shutdown):"), file=self._stderr)
                        print((
                            "Traceback (most recent call last):"), file=self._stderr)
                        while exc_tb:
                            print((
                                '  File "%s", line %s, in %s' %
                                (exc_tb.tb_frame.f_code.co_filename,
                                    exc_tb.tb_lineno,
                                    exc_tb.tb_frame.f_code.co_name)), file=self._stderr)
                            exc_tb = exc_tb.tb_next
                        print(("%s: %s" % (exc_type, exc_value)), file=self._stderr)
                    # Make sure that exc_tb gets deleted since it is a memory
                    # hog; deleting everything else is just for thoroughness
                    finally:
                        del exc_type, exc_value, exc_tb
            finally:
                # Prevent a race in
                # test_threading.test_no_refcycle_through_target when
                # the exception keeps the target alive past when we
                # assert that it's dead.
                #XXX self._exc_clear()
                pass
        finally:
            with _active_limbo_lock:
                try:
                    # We don't call self._delete() because it also
                    # grabs _active_limbo_lock.
                    del _active[get_ident()]
                except:
                    pass

    def _stop(self):
        # After calling ._stop(), .is_alive() returns False and .join() returns
        # immediately.  ._tstate_lock must be released before calling ._stop().
        #
        # Normal case:  C code at the end of the thread's life
        # (release_sentinel in _threadmodule.c) releases ._tstate_lock, and
        # that's detected by our ._wait_for_tstate_lock(), called by .join()
        # and .is_alive().  Any number of threads _may_ call ._stop()
        # simultaneously (for example, if multiple threads are blocked in
        # .join() calls), and they're not serialized.  That's harmless -
        # they'll just make redundant rebindings of ._is_stopped and
        # ._tstate_lock.  Obscure:  we rebind ._tstate_lock last so that the
        # "assert self._is_stopped" in ._wait_for_tstate_lock() always works
        # (the assert is executed only if ._tstate_lock is None).
        #
        # Special case:  _main_thread releases ._tstate_lock via this
        # module's _shutdown() function.
        lock = self._tstate_lock
        if lock is not None:
            assert not lock.locked()
        self._is_stopped = True
        self._tstate_lock = None

    def _delete(self):
        "Remove current thread from the dict of currently running threads."

        # Notes about running with _dummy_thread:
        #
        # Must take care to not raise an exception if _dummy_thread is being
        # used (and thus this module is being used as an instance of
        # dummy_threading).  _dummy_thread.get_ident() always returns -1 since
        # there is only one thread if _dummy_thread is being used.  Thus
        # len(_active) is always <= 1 here, and any Thread instance created
        # overwrites the (if any) thread currently registered in _active.
        #
        # An instance of _MainThread is always created by 'threading'.  This
        # gets overwritten the instant an instance of Thread is created; both
        # threads return -1 from _dummy_thread.get_ident() and thus have the
        # same key in the dict.  So when the _MainThread instance created by
        # 'threading' tries to clean itself up when atexit calls this method
        # it gets a KeyError if another Thread instance was created.
        #
        # This all means that KeyError from trying to delete something from
        # _active if dummy_threading is being used is a red herring.  But
        # since it isn't if dummy_threading is *not* being used then don't
        # hide the exception.

        try:
            with _active_limbo_lock:
                del _active[get_ident()]
                # There must not be any python code between the previous line
                # and after the lock is released.  Otherwise a tracing function
                # could try to acquire the lock again in the same thread, (in
                # current_thread()), and would block.
        except KeyError:
            if 'dummy_threading' not in _sys.modules:
                raise

    def join(self, timeout=None):
        """Wait until the thread terminates.

        This blocks the calling thread until the thread whose join() method is
        called terminates -- either normally or through an unhandled exception
        or until the optional timeout occurs.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof). As join() always returns None, you must call
        isAlive() after join() to decide whether a timeout happened -- if the
        thread is still alive, the join() call timed out.

        When the timeout argument is not present or None, the operation will
        block until the thread terminates.

        A thread can be join()ed many times.

        join() raises a RuntimeError if an attempt is made to join the current
        thread as that would cause a deadlock. It is also an error to join() a
        thread before it has been started and attempts to do so raises the same
        exception.

        """
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if not self._started.is_set():
            raise RuntimeError("cannot join thread before it is started")
        if self is current_thread():
            raise RuntimeError("cannot join current thread")

        if timeout is None:
            self._wait_for_tstate_lock()
        else:
            # the behavior of a negative timeout isn't documented, but
            # historically .join(timeout=x) for x<0 has acted as if timeout=0
            self._wait_for_tstate_lock(timeout=max(timeout, 0))

    def _wait_for_tstate_lock(self, block=True, timeout=-1):
        # Issue #18808: wait for the thread state to be gone.
        # At the end of the thread's life, after all knowledge of the thread
        # is removed from C data structures, C code releases our _tstate_lock.
        # This method passes its arguments to _tstate_lock.acquire().
        # If the lock is acquired, the C code is done, and self._stop() is
        # called.  That sets ._is_stopped to True, and ._tstate_lock to None.
        lock = self._tstate_lock
        if lock is None:  # already determined that the C code is done
            assert self._is_stopped
        elif lock.acquire(block, timeout):
            lock.release()
            self._stop()

    @property
    def name(self):
        """A string used for identification purposes only.

        It has no semantics. Multiple threads may be given the same name. The
        initial name is set by the constructor.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._name

    @name.setter
    def name(self, name):
        assert self._initialized, "Thread.__init__() not called"
        self._name = str(name)

    @property
    def ident(self):
        """Thread identifier of this thread or None if it has not been started.

        This is a nonzero integer. See the thread.get_ident() function. Thread
        identifiers may be recycled when a thread exits and another thread is
        created. The identifier is available even after the thread has exited.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._ident

    def is_alive(self):
        """Return whether the thread is alive.

        This method returns True just before the run() method starts until just
        after the run() method terminates. The module function enumerate()
        returns a list of all alive threads.

        """
        assert self._initialized, "Thread.__init__() not called"
        if self._is_stopped or not self._started.is_set():
            return False
        self._wait_for_tstate_lock(False)
        return not self._is_stopped

    isAlive = is_alive

    @property
    def daemon(self):
        """A boolean value indicating whether this thread is a daemon thread.

        This must be set before start() is called, otherwise RuntimeError is
        raised. Its initial value is inherited from the creating thread; the
        main thread is not a daemon thread and therefore all threads created in
        the main thread default to daemon = False.

        The entire Python program exits when no alive non-daemon threads are
        left.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._daemonic

    @daemon.setter
    def daemon(self, daemonic):
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if self._started.is_set():
            raise RuntimeError("cannot set daemon status of active thread")
        self._daemonic = daemonic

    def isDaemon(self):
        return self.daemon

    def setDaemon(self, daemonic):
        self.daemon = daemonic

    def getName(self):
        return self.name

    def setName(self, name):
        self.name = name

    Inside a thread you can get the thread name with getName() and set it with setName(); by default threads are named Thread-1, Thread-2, and so on.
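
    A minimal sketch of the name APIs mentioned above:

import threading

def work():
    print("running in", threading.current_thread().getName())

t = threading.Thread(target=work)
print(t.getName())           # default name, e.g. "Thread-1"
t.setName("worker-1")        # equivalent to: t.name = "worker-1"
t.start()
t.join()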

    Let's look at an example:

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("线程休眠了!")


if __name__ == "__main__":
    start_time = time.time()
    for i in range(10):
        t1 = threading.Thread(target=func,args=("thread_%s" %i,))
        t1.start()
    end_time = time.time()

    print("------------------all thread is running done-----------------------")
    run_time = end_time-start_time
    print("\033[34;1m程序运行时间:\033[0m",run_time)

    The code above produces the following result:

The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
------------------all thread is running done-----------------------
程序运行时间: 0.002081155776977539
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!

    Why is the program run time printed above only about 0.002 seconds, and not 2 seconds? Let's look at it carefully:

    A program has at least one thread: the program itself is the main thread, and it starts the child threads. The main thread and the child threads are independent of each other and run concurrently, each doing its own work; the main thread keeps executing downward while the children run. The program itself is a thread.

    Next, we use a list so that we can wait for each thread to finish:

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("线程休眠了!")


if __name__ == "__main__":
    start_time = time.time()
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        t.start()
        lists.append(t)
    for w in lists:
        w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕

    end_time = time.time()

    print("------------------all thread is running done-----------------------")
    run_time = end_time-start_time
    print("\033[34;1m程序运行时间:\033[0m",run_time)
The program executes as follows:
The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
线程休眠了!
------------------all thread is running done-----------------------
程序运行时间: 2.0065605640411377

    In the program above, we added a list: after each thread is started we append it to the list, and then we iterate over the list and join() every thread, so that all of them have finished before the code below runs.

    As you can see, the total time for all the threads to finish is 2.0065605640411377 seconds, which is the time the whole batch of threads takes. Because we only join the threads after starting all of them (via the temporary list), each thread runs without holding up the others; joining each thread immediately after start() would make the execution serial.

    join() docstring: "Wait until the thread terminates." -- wait for the thread to finish.

    In the program above we started 10 threads. Is the first thread that starts the main thread? No. The main thread is the program itself: when we run the program it executes from top to bottom, and that flow is itself a thread, the main thread. Let's verify this:

 

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("线程休眠了!,什么线程?",threading.current_thread())


if __name__ == "__main__":
    start_time = time.time()
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        t.start()
        lists.append(t)
    print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
    for w in lists:
        w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕

    end_time = time.time()

    print("------------------all thread is running done-----------------------",threading.current_thread())
    print("当前运行的线程数:",threading.active_count())
    run_time = end_time-start_time
    print("\033[34;1m程序运行时间:\033[0m",run_time)

 

    In the program above we added checks of whether the current thread is the main thread, both inside the worker function and in the main program, and we print the number of running threads both before and after the worker threads finish. The run result is as follows:

The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
运行的线程数:11
线程休眠了!,什么线程? <Thread(Thread-2, started 140013432059648)>
线程休眠了!,什么线程? <Thread(Thread-1, started 140013440452352)>
线程休眠了!,什么线程? <Thread(Thread-3, started 140013423666944)>
线程休眠了!,什么线程? <Thread(Thread-4, started 140013415274240)>
线程休眠了!,什么线程? <Thread(Thread-10, started 140013022988032)>
线程休眠了!,什么线程? <Thread(Thread-7, started 140013048166144)>
线程休眠了!,什么线程? <Thread(Thread-5, started 140013406881536)>
线程休眠了!,什么线程? <Thread(Thread-6, started 140013398488832)>
线程休眠了!,什么线程? <Thread(Thread-8, started 140013039773440)>
线程休眠了!,什么线程? <Thread(Thread-9, started 140013031380736)>
------------------all thread is running done----------------------- <_MainThread(MainThread, started 140013466183424)>
当前运行的线程数: 1
程序运行时间: 2.0047178268432617

    From the run result we can see that after the 10 threads start, the program has 11 running threads, and the workers are plain Thread objects; once they finish, the thread still running is the main thread <_MainThread>. So the program itself is the main thread: running the program starts a thread, and a worker thread stops automatically as soon as it has finished. (This is slightly different from what was observed on Windows, where the thread still showed as active.)

    threading.current_thread() returns the current thread, so you can check whether it is the main thread; threading.active_count() returns the number of currently running threads.
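
    A quick illustration of the two helpers (threading.main_thread() is available on Python 3.4+):

import threading

print(threading.current_thread())                               # <_MainThread(MainThread, started ...)>
print(threading.current_thread() is threading.main_thread())    # True when run from the main thread
print(threading.active_count())                                 # 1 when no other threads are running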

    Daemon threads: once the main thread exits, all daemon threads are terminated, regardless of whether they have finished. They are typically used for background housekeeping.

    As we saw, without join() the main thread simply keeps running regardless of whether the other threads are done, but the program still waits for them before it finally exits. If a thread is turned into a daemon thread, the program no longer cares whether that thread has finished; it only waits for the non-daemon threads to complete.

    Below we set the threads as daemon threads:

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("线程休眠了!,什么线程?",threading.current_thread())


if __name__ == "__main__":
    start_time = time.time()
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        t.setDaemon(True)    #Daemon:守护进程,把线程设置为守护线程
        t.start()
        lists.append(t)
    print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
    print("当前执行线程:%s" %threading.current_thread())
    # for w in lists:
    #     w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕

    end_time = time.time()

    print("------------------all thread is running done-----------------------",threading.current_thread())
    print("当前运行的线程数:",threading.active_count())
    run_time = end_time-start_time
    print("\033[34;1m程序运行时间:\033[0m",run_time)

    In the program above we start 10 threads and mark each one as a daemon thread with setDaemon(True). Let's see how it runs:

The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
运行的线程数:11
当前执行线程:<_MainThread(MainThread, started 140558033020672)>
------------------all thread is running done----------------------- <_MainThread(MainThread, started 140558033020672)>
当前运行的线程数: 11
程序运行时间: 0.0032095909118652344

    From the run result we can see that once the threads are daemon threads, they block on the IO operation (the sleep), and while they are waiting the main program finishes. Because they are daemon threads, the program exits without caring whether they completed. A daemon thread runs on its own: if it happens to finish before the main program ends, its output is printed; otherwise the main thread exits and the daemon threads are shut down with it.

    setDaemon() marks the thread as a daemon thread; it must be called before t.start().
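
    As a side note, on Python 3 the daemon flag can also be passed directly to the Thread constructor (available since Python 3.3), which has the same effect as calling setDaemon(True) before start(); a minimal sketch:

import threading,time

def func(num):
    print("The lucky num is ", num)
    time.sleep(2)

t = threading.Thread(target=func, args=(6,), daemon=True)   # same effect as t.setDaemon(True)
t.start()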

    GIL (Global Interpreter Lock): a four-core machine can genuinely do four things at the same time, while a single core is always serial. In Python, however, no matter whether you have 4 or 8 cores, only one thread executes Python bytecode at any given moment; this is a weakness left over from Python's early design, so CPU-bound Python threads behave as if they were on a single core. When Python computes, the interpreter drives C-level threads and can only wait for the results returned through that interface; it does not schedule those C threads itself. At any moment only one thread can hold the GIL and modify data. Other languages implement their threads entirely on their own; CPython's threads are native threads driven through C.
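
    A minimal sketch of the effect (exact timings depend on the machine): a CPU-bound loop takes roughly the same wall-clock time whether it runs in one thread or is split across two, because only one thread can hold the GIL at a time:

import threading,time

def count_down(n):
    while n > 0:
        n -= 1

N = 10000000

start = time.time()
count_down(N); count_down(N)
print("serial:  ", time.time() - start)

start = time.time()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print("threaded:", time.time() - start)   # typically not faster than the serial run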

    Thread lock (mutex)

    A process can start multiple threads, and all of them share the parent process's memory space, which means every thread can access the same data. So what happens when two threads want to modify the same piece of data at the same time?

    Normally that num should end up as 0, but if you run the code a few times on Python 2.7 you will find that the printed num is not always 0. Why do the results differ between runs? Simple: suppose threads A and B both want to decrement num by 1. Because the two threads run concurrently, both of them may fetch the initial value num=100 and hand it to the CPU at the same time; A's result is 99, but B's result is also 99, and after both assign their results back, num is 99 instead of 98. What to do? Also simple: before a thread modifies shared data, it takes a lock on that data, so nobody else can change it while the modification is in progress; any other thread that wants to modify the data has to wait until the change is finished and the lock is released.

    Note: don't bother reproducing this on 3.x; for whatever reason the result there is always correct, perhaps because a lock is effectively taken automatically.
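
    A minimal sketch that makes the lost update visible even on Python 3, by widening the read-modify-write window with an explicit temporary and a tiny sleep (purely illustrative):

import threading,time

num = 0

def add_one():
    global num
    tmp = num            # read the shared value
    time.sleep(0.0001)   # widen the window so other threads read the same stale value
    num = tmp + 1        # write back: updates based on the stale value are lost

threads = [threading.Thread(target=add_one) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(num)               # usually well below 100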

 

    Threads can interact with one another. Let's look at an example in which all the threads modify the same piece of data:

 

import threading,time

def func(n):
    global num
    time.sleep(0.8)                            #sleep()不占用CPU,CPU会执行其他线程
    num += 1                                   #所有的线程共同修改num数据

if __name__ == "__main__":
    num = 0
    lists = []
    for i in range(1000):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        # t.setDaemon(True)    #Daemon:守护进程,把线程设置为守护线程
        t.start()
        lists.append(t)
    print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
    for w in lists:
        w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕

    print("------------------all thread is running done-----------------------")
    print("当前运行的线程数:",threading.active_count())

    print("num:",num)                           #所有的线程共同修改一个数据

 

    上面程序中,全数线程都会操作num,让num数量加1,符合规律结果就是1000,运维结果如下:

运行的线程数:1001
------------------all thread is running done-----------------------
当前运行的线程数: 1
num: 1000

    The result here is indeed 1000, but on earlier versions the result was often not 1000 (999 or similar), and on some systems the problem shows up every time; on Python 3 it does not seem to appear. Why does it happen at all?

    The interpreter lets only one thread run at a time (the thread must hold the GIL), and threads are scheduled in time slices: when a thread's slice is up, it releases the GIL even though its work is not finished and it has not written its result back. The thread has run, but not to completion, so another thread then picks up the same old, unmodified value of num, and one of the updates is lost.


 

    How do we fix this? By locking. The GIL itself is acquired and released by the interpreter; on top of that, we acquire and release our own lock in the program, so that while one thread is doing the computation no other thread can touch the data, and a GIL release in the middle of the operation can no longer produce a wrong result. Our own lock is released only after the thread has finished its modification, and only then can another thread take its turn. As follows:

 

import threading,time

def func(n):
    lock.acquire()                             #加锁,让此线程先执行完毕再释放
    global num
    # time.sleep(0)                            #sleep()不占用CPU,CPU会执行其他线程
    num += 1                                   #所有的线程共同修改num数据
    lock.release()

if __name__ == "__main__":
    lock = threading.Lock()                    #声明一个锁的变量
    num = 0
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        # t.setDaemon(True)    #Daemon:守护进程,把线程设置为守护线程
        t.start()
        lists.append(t)
    print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
    for w in lists:
        w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕

    print("------------------all thread is running done-----------------------")
    print("当前运行的线程数:",threading.active_count())

    print("num:",num)                           #所有的线程共同修改一个数据

 

     In the program above, we first create a lock with lock = threading.Lock(), then acquire it inside the thread function with lock.acquire() and release it with lock.release(). If you add a lock, keep the locked section short: other threads can only run once the lock is released, so the locked code is effectively serial. Don't do IO or anything slow inside it. Locking obviously slows the program down, but it guarantees that the data is correct: the lock is released only when this thread's update is finished, and only then does the next thread get its turn.

    While the program runs, the GIL is being acquired and released by the interpreter itself, and on top of that we have added our own lock.
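
    Lock objects also support the context-manager protocol, so the acquire/release pair can be written with a with statement, which releases the lock even if an exception occurs; a minimal sketch:

import threading

lock = threading.Lock()
num = 0

def func(n):
    global num
    with lock:                                 # acquired on entry, released on exit, even on exceptions
        num += 1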

    Recursive lock: if you nest ordinary locks, the program can no longer tell which release belongs to which acquire and locks itself up, so you need a recursive lock. First, the broken version:

import threading
'''自己写一个递归所的实例'''

def run1(num):
    lock.acquire()
    num += 1
    lock.release()
    return num

def run2(num):
    lock.acquire()
    num += 2
    lock.release()
    return num

def run3(x,y):
    lock.acquire()
    """执行run1"""
    res1 = run1(x)                                         #调用run1,run1里面也加锁了,是run3下面的锁
    '''执行run2'''
    res2 = run2(y)                                         #调用run2,run2里面也加锁了,是run3下面的锁,与run1平行,没有上下级关系
    lock.release()
    print("res1:",res1,"res2:",res2)

if __name__ == "__main__":
    lock = threading.Lock()
    for i in range(10):
        t = threading.Thread(target=run3,args=(1,1,))       #对run3函数加锁
        t.start()
    while threading.active_count() != 1:                    #判断活跃线程个数,当其他线程都执行完毕,只剩主线程时,就是1
        print("\033[31m活跃的线程个数:%s\033[0m" %threading.active_count())
    else:
        print("All the threading task done!!!")

    Above we wrote three functions. run3 calls run1 and run2; run3 acquires the lock, and run1 and run2 each acquire the same lock again inside it. The locks in run1 and run2 sit underneath run3's lock and are parallel to each other (no parent-child relationship between them). Now let's run it and see the result:

活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
......

    From the output we can see that the 10 started threads never make progress: because the same plain Lock is acquired again at the inner level, the program cannot match the releases to the acquires and deadlocks. What's the fix? Use a recursive lock; a recursive lock essentially tags the lock with its owner, so the same thread can acquire it again.

import threading
'''写一个递归锁'''

def run1():
    lock.acquire()     #加锁
    global num1
    num1 += 1
    lock.release()
    return num1

def run2():
    '''加锁'''
    lock.acquire()
    global num2
    num2 += 2
    lock.release()
    return num2

def run3():
    lock.acquire()
    res1 = run1()
    '''执行第二个调用'''
    res2 = run2()
    lock.release()
    print(res1,res2)

if __name__ == "__main__":
    num1,num2 =1,2
    lock = threading.RLock()
    for i in range(10):
        t = threading.Thread(target=run3)
        t.start()

while threading.active_count() != 1:
    print("\033[31m当前活跃的线程个数:%s\033[0m" %threading.active_count())
else:
    print("All the thread has task done!!!!")
    print(num1,num2)

     In the code above we switched to a recursive lock, which gives the nested acquires a well-defined way out (recursion: the same thread may re-enter), and the problem is solved:

2 4
3 6
4 8
5 10
6 12
7 14
8 16
9 18
10 20
11 22
当前活跃的线程个数:2
All the thread has task done!!!!
11 22

    This time the program runs correctly and the nested locks no longer go wrong, because we used the recursive lock RLock(). The example also shows how to use a global variable inside a function: declare it with global first, then modify it.
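
    The key property of RLock is that the same thread may acquire it more than once: it records an owner and a counter, and the lock is only really released when the counter drops back to zero; a minimal sketch:

import threading

rlock = threading.RLock()

def inner():
    with rlock:                      # second acquire by the same thread: fine with RLock,
        print("inner reached")       # would deadlock with a plain Lock

def outer():
    with rlock:                      # first acquire
        inner()

t = threading.Thread(target=outer)
t.start()
t.join()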

    Semaphore

    A mutex allows only one thread to modify the data at a time, whereas a Semaphore allows a fixed number of threads to do so at once. For example, a toilet with 3 stalls lets at most 3 people in at the same time; everyone behind them has to wait for someone to come out before going in.

    A semaphore therefore limits how many threads run at the same time: we can start many threads, but only a fixed number of them execute concurrently, and as threads finish, new ones are admitted, until all of them have run.
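
    The difference between Semaphore and BoundedSemaphore is that the bounded variant raises ValueError if it is released more times than it was acquired, which catches bookkeeping mistakes; a minimal sketch:

import threading

sem = threading.BoundedSemaphore(2)
sem.acquire()
sem.release()
try:
    sem.release()                    # one release too many
except ValueError as err:
    print("BoundedSemaphore rejected the extra release:", err)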

import threading,time
'''写一个递归锁'''

def run1():
    global num1
    num1 += 1
    return num1

def run2():
    global num2
    num2 += 2
    return num2

def run3():
    semaphore.acquire()
    res1 = run1()
    '''执行第二个调用'''
    res2 = run2()
    semaphore.release()
    time.sleep(2)
    print(res1,res2)

if __name__ == "__main__":
    num1,num2 =1,2
    lock = threading.RLock()
    semaphore = threading.BoundedSemaphore(5)
    for i in range(10):
        t = threading.Thread(target=run3)
        t.start()

while threading.active_count() != 1:
    print("\033[31m当前活跃的线程个数:%s\033[0m" %threading.active_count())
else:
    print("All the thread has task done!!!!")
    print(num1,num2)

    The program above uses a semaphore, so only 5 threads may execute at the same time even though 10 are started. Bounded = bounded, Semaphore = semaphore; a BoundedSemaphore is a bounded semaphore, i.e. it fixes the number of threads allowed to run simultaneously. The run output is as follows:

当前活跃的线程个数:11
当前活跃的线程个数:11
......
3 6
当前活跃的线程个数:11
......
4 8
当前活跃的线程个数:9
......
6 12
5 10
7 14
2 4
当前活跃的线程个数:5
8 16
当前活跃的线程个数:4
......
11 22
当前活跃的线程个数:3
......
10 20
当前活跃的线程个数:2
......
9 18
All the thread has task done!!!!
11 22

    From the result you can see that execution happens in batches: only 5 threads run at any one time, and as threads finish, new ones are admitted.

    Events

    An event is a simple synchronization object: the event represents an internal flag, and threads can wait for the flag to be set, or set or clear the flag themselves.

    event = threading.Event()   # create an event object

    event.wait()                # a client thread can wait for the flag to be set (blocks on the flag)

    event.set()                 # a server thread can set or reset it

    event.clear()               # clear the flag

    If the flag is set, the wait method doesn't do anything.

    If the flag is cleared, wait will block until it becomes set again.

    Any number of threads may wait for the same event.

    Below is a traffic-light example: the light alternates so that traffic can flow; when the light is red the car thread waits, and when it is green the car drives. It is a thread-interaction scenario built on an Event:

 

import threading,time

def traffic_lights():
    counter = 0
    while True:
        if counter < 30:
            print("\033[42m即将转为绿灯,准备通行!!!\033[0m")
            event.set()                          #一分钟为一个轮回,30秒以内为绿灯
            print("\033[32m绿灯,通行......\033[0m")
        elif counter >= 30 and counter <= 60:
            print("\033[41m即将转为红灯,请等待!!!\033[0m")
            event.clear()                        #清楚标志,转为红灯
            print("\033[31m红灯中,禁止通行......\033[0m")
        elif counter > 60:
            counter = 0                          #超过60秒重新计数,重新下一次循环
        counter += 1
        time.sleep(1)                            #一秒一秒的运行

def car(name):
    '''定义车的线程,汽车就检测是否有红绿灯,通行和等待'''
    while True:
        if event.is_set():                       #存在标识位,说明是绿灯
            '''检测,如果存在标志位,说明是绿灯中,车可以通行'''
            print("[%s] is running!!!" %name)
        else:
            '''标识位不存在,说明是红灯过程中'''
            print("[%s] is waitting!!!" %name)
        time.sleep(1)

if __name__ == "__main__":
    try:
        event = threading.Event()
        lighter = threading.Thread(target=traffic_lights)
        lighter.start()
        '''启动多个车的线程'''
        for i in range(1):
            my_car = threading.Thread(target=car,args=("tesla",))
            my_car.start()
    except KeyboardInterrupt as e:
        print("线程断开了!!!")

    except Exception as e:
        print("线程断开了!!!")

 

    The program above runs as follows:

即将转为绿灯,准备通行!!!
绿灯,通行......
[tesla] is running!!!
即将转为绿灯,准备通行!!!
绿灯,通行......
[tesla] is running!!!
......
即将转为红灯,请等待!!!
[tesla] is running!!!
红灯中,禁止通行......
[tesla] is waitting!!!
即将转为红灯,请等待!!!
红灯中,禁止通行......
[tesla] is waitting!!!
......

    Above we defined two threads and let them interact through an Event: event.set() sets the flag, meaning traffic may pass; event.clear() clears the flag, meaning the cars must wait until the flag is set again.

import threading,time

def traffic_lights():
    '''设置红绿灯,会显示事件,以及由绿——黄——红、红———黄——绿的转换'''
    global counter                                                           #计时器
    counter = 0
    while True:
        if counter < 40:                                                     #绿灯通行中
            event.set()
            '''绿灯中,可以通行'''
            print("\033[42mThe light is on green light,runing!!!\033[0m")
            print("剩余通行时间:%s" %(40-counter))
        elif counter >40 and counter <= 43:
            event.clear()
            '''黄灯中,是由绿灯转为红灯的'''
            print("Yellow light is on,waitting!!!即将转为红灯!")
        elif counter > 43 and counter <= 63:
            '''红灯,由黄灯转换为红灯'''
            print("\033[41mThe red light is on!!! Waitting\033[0m")
            print("剩余红灯时间:%s" %(63-counter))
        elif counter > 63 and counter <= 66:
            '''由红灯转换为黄灯,即将转为绿灯'''
            print("The yellow light is on,waitting!!!即将转为绿灯!!")
        elif counter > 66:
            counter = 0
        counter += 1
        time.sleep(1)

def go_through(name):
    '''通行线程,根据上面红绿灯判断是否通行'''
    while True:
        if event.is_set():
            """绿灯,可以通行"""
            print("[%s] is running!!!" %name)
        else:
            print("%s is waitting!!!" %name)
        time.sleep(1)

if __name__ == "__main__":
    event = threading.Event()
    lights = threading.Thread(target=traffic_lights)
    lights.start()

    car = threading.Thread(target=go_through,args=("tesla",))
    car.start()

    In this program we added a countdown display, much like a real-world traffic light, and the light cycles green - yellow - red and then red - yellow - green, switching back and forth. The output looks like this:

The light is on green light,runing!!!
剩余通行时间:40
[tesla] is running!!!
The light is on green light,runing!!!
剩余通行时间:39
[tesla] is running!!!
......
The light is on green light,runing!!!
剩余通行时间:1
[tesla] is running!!!
Yellow light is on,waitting!!!即将转为红灯!
tesla is waitting!!!
The red light is on!!! Waitting
剩余红灯时间:19
tesla is waitting!!!
......
The red light is on!!! Waitting
剩余红灯时间:0
tesla is waitting!!!
The yellow light is on,waitting!!!即将转为绿灯!!
tesla is waitting!!!
......
The light is on green light,runing!!!
剩余通行时间:39
[tesla] is running!!!

    Above we implemented the alternation of the red and green lights by setting and clearing the flag on a timer; the car thread checks the flag to decide whether it may pass, driving only while the flag is set (green) and waiting otherwise.
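
    One last remark: the car thread in the examples above polls the flag once per second even while the light is red. Using event.wait() it can instead block until the light turns green; a minimal sketch that reuses the event object and imports from the example above (a drop-in replacement for go_through, not a complete program):

def go_through(name):
    while True:
        event.wait()                         # block here until the flag is set (green light)
        print("[%s] is running!!!" % name)
        time.sleep(1)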
