Locks/Reference Counting:

vm_address_space:
    sem: R/W lock for area creation/deletion and any other address space
        changes; guards the fields: areas, area_hint (currently written in
        vm_area_lookup() without a write lock!), state
    ref_count: ensures validity of the object beyond team lifetime;
        retrieved via the global sAddressSpaceTable's pointer (which is
        guarded by sAddressSpaceHashSem)
    vm_address_space_walk_next() is unsafe! (and obsolete, only used by the
        former page scanner)
    Problems:
        resize_area() does not lock any address spaces yet, but needs to
        lock all clones

vm_area:
    ref_count: ensures validity;
        retrieved via the global sAreaHash's pointer (which is guarded by
        sAreaHashLock)
        vs. vm_area_lookup(), which iterates over the address space's area
        list, not the hash - therefore it checks ref_count against NULL
        (ugly)
    variable fields:
        size, protection: essentially unguarded! (can be changed by
            resize_area() and set_area_protection())
        mappings: guarded by the global sMappingLock (currently a spinlock)
        address_space_next: vm_address_space::sem
        hash_next: sAreaHashLock
        cache: guarded by vm_area_get_locked_cache()/sAreaCacheLock
        cache_next|prev: cache_ref::lock

vm_cache_ref:
    ref_count: ensures validity
        vm_cache_remove_consumer(): does scary things with the ref_count
        fault_acquire_locked_source(): tries to get a ref through the
        vm_cache
    cache, areas: guarded by lock

vm_cache:
    all fields: guarded by ref::lock
    BUT: ref may change, therefore it's generally unsafe to go from cache
    to ref without holding the ref's lock (which happens, by design, in
    vm_cache::source and vm_cache::consumers)!

vm_page:
    hash_next: guarded by the sPageCacheTableLock spinlock
    queue_prev|next: guarded by sPageLock
    cache_prev|next, cache, cache_offset: guarded by vm_cache_ref::lock
    mappings: guarded by the global sMappingLock (currently a spinlock)
    state: in vm_page only used with sPageLock held; other uses have the
        cache the page is in locked
    wired_count, usage_count: not guarded? TBD
    busy_reading, busy_writing: dummy pages only

vm_translation_map:
    TBD.
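The ref_count pattern used by vm_address_space and vm_area (a reference may only be acquired through the global hash table while holding that table's lock, which is what makes an unlocked walk unsafe) can be sketched roughly as follows. This is a minimal, hypothetical illustration in plain C with a pthread mutex and a single-bucket table, not Haiku's actual code; all names here are invented stand-ins.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

/* Invented stand-in for a ref-counted, hash-registered kernel object. */
struct address_space {
    int id;
    atomic_int ref_count;           /* keeps the object valid beyond its
                                       owner's lifetime */
    struct address_space *hash_next;
};

/* Stand-ins for the global table and the lock guarding it. */
static pthread_mutex_t sTableLock = PTHREAD_MUTEX_INITIALIZER;
static struct address_space *sTable;    /* single-bucket "hash table" */

/* Acquire a reference: the table lock guarantees the object cannot be
 * freed between finding its pointer and incrementing ref_count. Walking
 * the table without this lock would race with deletion. */
struct address_space *
get_address_space(int id)
{
    pthread_mutex_lock(&sTableLock);
    struct address_space *space = sTable;
    while (space != NULL && space->id != id)
        space = space->hash_next;
    if (space != NULL)
        atomic_fetch_add(&space->ref_count, 1);
    pthread_mutex_unlock(&sTableLock);
    return space;
}

/* Release a reference; the last one frees the object. */
void
put_address_space(struct address_space *space)
{
    if (atomic_fetch_sub(&space->ref_count, 1) == 1)
        free(space);
}
```

The same reasoning explains the vm_area_lookup() wart noted above: a lookup that bypasses the hash (iterating an area list instead) has no table lock to pin the object, so it must fall back to checking ref_count itself.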
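The cache-to-ref hazard described above (a cache's ref pointer may change, so it may only be followed safely by locking the ref and then re-checking) is the classic lock-and-revalidate pattern. A minimal sketch, with invented names that do not reflect Haiku's real API, and with the simplifying assumption that the ref itself is kept alive by some reference the caller already holds (a full version would also juggle ref_count):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct vm_cache;

/* Invented stand-ins for the cache/ref pair described in the notes. */
struct vm_cache_ref {
    pthread_mutex_t lock;
    struct vm_cache *cache;
};

struct vm_cache {
    struct vm_cache_ref *ref;   /* may change while ref->lock is not held! */
};

/* Returns the cache's ref with its lock held. The re-check after locking
 * is what makes the cache -> ref traversal safe: once we hold the lock
 * and the cache still points at this ref, the pointer can no longer
 * change under us. */
struct vm_cache_ref *
cache_acquire_locked_ref(struct vm_cache *cache)
{
    while (true) {
        struct vm_cache_ref *ref = cache->ref;
        pthread_mutex_lock(&ref->lock);
        if (cache->ref == ref)
            return ref;                     /* pinned: safe to use */
        pthread_mutex_unlock(&ref->lock);   /* lost a race; retry */
    }
}
```

This also motivates why vm_cache::source and vm_cache::consumers are allowed, by design, to be traversed only with the appropriate ref lock already held.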