From 0a5a4814fa0a19173f7485721dcae5dc3562e258 Mon Sep 17 00:00:00 2001
From: czj0xyz <93525193+czj0xyz@users.noreply.github.com>
Date: Sat, 17 Jun 2023 02:08:36 +0800
Subject: [PATCH] W3D2 & W7D2 & W11D2: Chen Zejia

---
 W11D2/W11D2Notes_ChenZejia.md | 42 +++++++++++++++++++++++++++++++++++
 W3D2/W3D2Notes_ChenZejia.md   | 36 ++++++++++++++++++++++++++++++
 W7D2/W7D2Notes_ChenZejia.md   | 34 ++++++++++++++++++++++++++++
 3 files changed, 112 insertions(+)
 create mode 100644 W11D2/W11D2Notes_ChenZejia.md
 create mode 100644 W3D2/W3D2Notes_ChenZejia.md
 create mode 100644 W7D2/W7D2Notes_ChenZejia.md

diff --git a/W11D2/W11D2Notes_ChenZejia.md b/W11D2/W11D2Notes_ChenZejia.md
new file mode 100644
index 0000000..7c511d0
--- /dev/null
+++ b/W11D2/W11D2Notes_ChenZejia.md
@@ -0,0 +1,42 @@
+## W11D2 notes
+
+**written by Chen Zejia.**
+
+## Multi-core/processor
+
+* e.g. SMC/SMP/cluster/Grid/Cloud...
+* UMA: provides equal access to memory for all processors and has lower latency.
+* NUMA: provides higher scalability and better memory utilization.
+
+### Multiprocessor Synchronization
+
+* Multiple spinlocks: the locks form a list.
+* Use multiple locks to avoid cache thrashing.
+
+### Time/Space Sharing
+
+### Gang Scheduling
+
+* Groups of related threads are scheduled as a unit, a gang.
+* All members of a gang run at once on different timeshared CPUs.
+* All gang members start and end their time slices together.
+
+### Blocking vs Nonblocking Calls
+
+### Distributed Shared Memory
+
+* Like a page table
+* "False" sharing
+
+### Graph computing
+
+* Use a graph to express the relations (the edges represent the correlation of the data).
+* Graph-Theoretic Deterministic Algorithm
+* GraphChi
+* Less space, higher locality.
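The spinlock bullets under Multiprocessor Synchronization can be sketched in Python (my own illustration, not from the notes; `Lock.acquire(blocking=False)` stands in for a hardware test-and-set instruction):

```python
import threading

class SpinLock:
    """Busy-waiting lock: keep testing and setting a flag until it is free.
    Lock.acquire(blocking=False) stands in for an atomic test-and-set."""

    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass  # spin: burn CPU instead of sleeping

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1  # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: the lock makes the increments atomic
```

Spinning is only sensible when the critical section is short and the holder runs on another CPU; otherwise sleeping (mutex) wastes less time.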
+
+### Object-Based Middleware
+
+* CORBA
+* DCOM: Distributed Component Object Model

diff --git a/W3D2/W3D2Notes_ChenZejia.md b/W3D2/W3D2Notes_ChenZejia.md
new file mode 100644
index 0000000..4266370
--- /dev/null
+++ b/W3D2/W3D2Notes_ChenZejia.md
@@ -0,0 +1,36 @@
+## W3D2 notes
+
+**written by Chen Zejia.**
+
+## Process/Thread
+
+### Sync
+
+* lock, mutex, semaphore
+  * mutex: shared, exclusive. Misuse might lead to a race condition.
+* CLI: Clear Interrupt.
+* Busy-waiting: keep testing and setting (the test-and-set might not be atomic).
+* Sleep and wake up
+* Semaphore, by Dijkstra
+* RCU: read-copy-update
+
+### Producer-consumer problem
+
+* Using semaphores: use a counter/waiting queue. The order of up/down cannot be reversed.
+* Mutex in pthread: condition variable and mutex
+* Monitors: try to solve this at the language level. At most one procedure runs inside the monitor at a time. Use condition variables to switch.
+
+### Barriers
+
+* Memory barriers: used to prevent reordering of memory operations (e.g. Tomasulo in computer architecture).
+* Memory consistency.
+
+### Schedule
+
+* Priority, time slice (time-use unit)
+* RR
+* O(1): use two queues to maintain the processes
+* Staircase Down -> RSDL
+* CFS
+* BFS (Brain Fuck Scheduler)

diff --git a/W7D2/W7D2Notes_ChenZejia.md b/W7D2/W7D2Notes_ChenZejia.md
new file mode 100644
index 0000000..3e23ea5
--- /dev/null
+++ b/W7D2/W7D2Notes_ChenZejia.md
@@ -0,0 +1,34 @@
+## W7D2 notes
+
+**written by Chen Zejia.**
+
+## Deadlock
+
+### Definition
+
+A set of processes is deadlocked if:
+
+* each process is waiting for an event, and
+* that event can be caused only by another process in the set.
+
+### Conditions
+
+* Mutual exclusion
+* Hold and wait
+* No preemption
+* Circular wait
+
+### Solutions
+
+* Don't care; ignore the problem.
+* Detection and recovery
+  * use some structure (e.g. a matrix/graph) to track the resources.
+* Dynamic avoidance by careful resource allocation.
+* Prevention, by structurally negating one of the four required conditions:
+  * Mutual exclusion
+  * Hold and wait: request all resources initially
+  * No preemption: take resources away
+  * Circular wait: order resources numerically
+
+### Banker's algorithm
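A minimal sketch of the Banker's algorithm safety check (my own illustration, not from the notes): a state is safe if some ordering lets every process obtain its remaining need, finish, and release what it holds.

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check.

    available  : free units per resource type
    allocation : allocation[i] = units currently held by process i
    need       : need[i] = units process i may still request
    Returns True iff some completion order lets every process finish.
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion, then releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Classic 5-process, 3-resource example: the state below is safe.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True
```

A request is granted only if granting it still leaves the system in a safe state; otherwise the requester waits.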