os212


LINKS

  1. Scele Fasilkom UI
    Student Centered E-Learning Environment Fasilkom UI

  2. Linux Journey
    Learn the ways of Linux-fu, for free.

  3. Operating Systems
    This is the CSCM-602055 Operating System course site, a GitHub Page, hosted at GitHub.com (thank you!). It is managed by VauLSMorg (vlsm.org) since 2018. This site contains links to lecture materials, exam questions, and laboratory materials. It is based on “Google Here, Google There, Try This, Try That, and then Ask Anybody (GHGTT4A2)”.

  4. Debian Documentation
    Full documentation and tutorials for using Debian Linux.

  5. Linuxtopia
    Linux for Beginners - Learning Debian GNU/Linux.

  6. What is TAR and How to Use?
    A TAR archive is a collection of files wrapped into a single file to make storage easy. Instead of tracking an entire folder of files, we only need to track one file. This website discusses TAR files in depth, including how to use, compress, and convert them.

  7. Develop Your Own Filesystem with FUSE
    FUSE allows us to develop fully functional file systems that have simple API libraries, are accessible to non-privileged users, and provide secure implementations. This website will discuss how to develop your own file system using FUSE along with the steps to install FUSE.

  8. Network File System
    NFS (Network File System) is a file sharing protocol over a network. NFS shares files or resources over the network regardless of which operating system we use. Curious to know more about NFS? This website discusses NFS, its advantages and disadvantages, and even its history.

  9. Investigating Metadata
    Metadata is structured data, marked with a code so that it can be processed by a computer, describing the characteristics of information-carrying units. This website discusses how metadata is used to expose, protect against, and verify abuses and excesses of power.

  10. Little and Big Endian Mystery
    Little and big endian are two ways to store multibyte data types (int, float, etc.). In little-endian machines, the last byte of the binary representation of a multibyte value is stored first, whereas in big-endian machines the first byte is stored first. This website discusses little endian and big endian in depth, along with the mystery between the two.
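
    As a quick illustration (my own sketch, not from the linked site), the following C program inspects the first byte of an integer to report which byte order the machine uses:

```c
/* Minimal byte-order check: look at the first byte of a known constant. */
#include <stdio.h>

int main(void) {
    unsigned int x = 0x01020304;
    unsigned char *first_byte = (unsigned char *)&x;

    /* On a little-endian machine the least significant byte (0x04) comes first;
       on a big-endian machine the most significant byte (0x01) comes first. */
    if (*first_byte == 0x04)
        printf("little endian\n");
    else if (*first_byte == 0x01)
        printf("big endian\n");
    else
        printf("unexpected byte order\n");
    return 0;
}
```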

  11. The Role of Memory on The Computer
    Computers use memory (RAM) to temporarily store the instructions and data needed to complete tasks, which allows the computer’s CPU to access them very quickly. This website discusses how memory is one of the most important components of a computer.

  12. Introduction to Memory Management
    Main memory refers to physical memory, the memory internal to the computer. The computer can only change data that is in main memory, so every program we run and every file we access must be copied from the storage device into main memory. On this website, we will learn about memory protection, memory allocation, fragmentation, and much more!

  13. Contiguous VS Non-Contiguous Memory Allocation
    In an operating system there are two techniques for memory allocation, namely contiguous memory allocation and non-contiguous memory allocation. In the tutorial on this website, we will learn the differences between the two.

  14. Journey Across Static and Dynamic Libraries
    The nature of a library determines how the linker binds it into the final executable of a program. Compiled libraries come in two forms, namely static libraries and dynamic libraries. Each form has advantages and disadvantages and differs in how it is handled during the linking stage. This website digs deeper into the meaning of, journey through, and differences between static and dynamic libraries.

  15. Virtual Memory in Operating Systems
    Virtual memory is a technique that separates logical memory from physical memory. Logical memory is the collection of all pages of a program. Without virtual memory, the whole of logical memory would have to be brought into main memory directly. This website discusses the benefits of virtual memory, when it is needed, and how programs execute with it.

  16. An introduction to virtual memory
    Virtual memory is a technique that separates logical memory from physical memory. This website explains what virtual memory is, its benefits, where all the memory comes from, and gives a short overview of how it works.

  17. Cache Memory in Computer Organization
    Cache memory is a small, temporary memory. This website discusses memory levels and the types, performance, and mapping of caches.

  18. What is Memory Allocation in Operating System?
    Memory allocation reserves a memory block of a specified size and returns a void pointer, the address of the first allocated memory block, which can be cast to a pointer of another type. This website gives an in-depth explanation of memory allocation, the types of memory allocation, and the advantages and disadvantages of each type.
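
    A minimal C sketch (not from the linked site) of this idea: malloc returns a void pointer, which we cast to the pointer type we actually need:

```c
/* Dynamic allocation of an array of ints with malloc, then releasing it. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 5;

    /* Reserve a block large enough for n ints; malloc returns void *. */
    int *numbers = (int *)malloc(n * sizeof *numbers);
    if (numbers == NULL) {
        perror("malloc");
        return 1;
    }

    for (size_t i = 0; i < n; i++)
        numbers[i] = (int)(i * i);

    for (size_t i = 0; i < n; i++)
        printf("%d ", numbers[i]);
    printf("\n");

    free(numbers);  /* return the block to the allocator */
    return 0;
}
```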

  19. Difference between Concurrency and Parallelism
    This website discusses the difference between concurrency and parallelism, and also explains in more detail what each of the two means.

  20. Multitasking vs Multithreading
    Many people are still confused about the difference between multitasking and multithreading, so this website discusses the differences between the two in depth, comparing them across various bases so that we can understand them better.

  21. How To Manage Processes from the Linux Terminal: 10 Commands You Need to Know
    The Linux terminal has a number of useful commands that can display running processes, stop them, and change their priority level. This website will tell you about classic and traditional commands, as well as some of the more useful, modern commands that make it easier for users.

  22. Process vs Thread: What’s the difference?
    This website discusses the differences between processes and threads, which are still often confused by many people. Their definitions, key differences, and properties are also described in detail.

  23. Learn and use fork(), vfork(), wait() and exec() system calls across Linux Systems
    In Linux/Unix based operating systems, it is important to understand the fork and vfork system calls, how they behave, how we can use them, and the differences between them. This website helps us find all of that out in depth: it discusses what the fork, vfork, exec, and wait system calls are, their distinguishing characteristics, and how they can best be used.
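
    A minimal C sketch (not from the linked site) combining fork(), execl(), and waitpid() on a Linux/Unix system; /bin/ls is just an example program for the child to run:

```c
/* fork + exec + wait: parent creates a child, the child runs ls,
   and the parent waits for the child to finish. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* duplicate the calling process */

    if (pid < 0) {
        perror("fork");
        return 1;
    } else if (pid == 0) {
        /* Child: replace this process image with the ls program. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");         /* only reached if exec fails */
        _exit(1);
    } else {
        /* Parent: wait until the child terminates. */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```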

  24. Is Concurrency Really Increase the Performance?
    If you want to improve the performance of your program, one possible solution is to apply concurrent programming techniques. Basically, in concurrent execution, multiple threads of the same program are executed at the same time. This website helps you understand whether concurrency can really improve performance: it discusses what concurrency is, various cases and problems, and how concurrent programs compare to serial programs in terms of performance.

  25. Semaphores in Process Synchronization
    A semaphore is a non-negative variable that is shared between threads. This variable is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment. This website discusses the types of semaphores, the P and V operations, and key points about those operations.
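
    A minimal C sketch (not from the linked site) of the P and V operations using POSIX semaphores to protect a shared counter; compile with gcc -pthread on Linux:

```c
/* Two threads increment a shared counter; a binary semaphore guards the
   critical section with P (sem_wait) and V (sem_post). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;             /* binary semaphore, initial value 1 */
static int shared_counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);       /* P: decrement, block if the value is 0 */
        shared_counter++;       /* critical section */
        sem_post(&mutex);       /* V: increment, wake a waiting thread */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("final counter: %d\n", shared_counter);  /* 200000 with the semaphore */
    sem_destroy(&mutex);
    return 0;
}
```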

  26. Dining Philosopher Problem Using Semaphores
    The dining philosophers problem is a classic synchronization problem because it demonstrates a large class of concurrency-control problems. This website covers the definition of the dining philosophers problem, a semaphore solution to it, and the code.

  27. What’s Race Condition?
    A race condition is a situation that may occur inside a critical section. It happens when the result of executing multiple threads in the critical section differs depending on the order in which the threads run. If you want to learn more about race conditions, this website will help you understand them: it covers the meaning of a race condition in detail, security vulnerabilities caused by race conditions, and how to prevent them.
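
    A minimal C sketch (not from the linked site) that deliberately triggers a race condition: two threads increment a shared counter with no synchronization, so the final value is usually below the expected 200000 (compile with gcc -pthread):

```c
/* Unsynchronized access to a shared counter: a classic race condition. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;   /* shared variable, no lock protecting it */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;        /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 200000, but lost updates make the result unpredictable. */
    printf("counter = %d\n", counter);
    return 0;
}
```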

  28. All about Semaphores in Operating System
    A semaphore is a variable that can hold only a non-negative integer value, is shared between all the threads, and has the operations wait and signal. This website will guide you to a better understanding of everything related to semaphores: their history, definition, properties, and types.

  29. Petersons Algorithm in Process Synchronization
    Peterson's algorithm is used to synchronize two processes. It uses two variables, a boolean array flag of size 2 and an integer variable turn, to accomplish this. This website contains the producer-consumer problem, an explanation of Peterson's algorithm, and a C program that implements it.
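
    A minimal C sketch (not from the linked site) of Peterson's algorithm for two threads. Note that modern compilers and CPUs may reorder these memory accesses, so this is for illustration only and real code should use atomics or locks (compile with gcc -pthread):

```c
/* Peterson's algorithm: flag[] announces intent, turn yields to the other thread. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static volatile bool flag[2] = {false, false};  /* flag[i]: thread i wants in */
static volatile int turn = 0;                   /* whose turn it is to wait   */
static int counter = 0;

static void lock(int self) {
    int other = 1 - self;
    flag[self] = true;       /* announce intent to enter */
    turn = other;            /* give priority to the other thread */
    while (flag[other] && turn == other)
        ;                    /* busy-wait while the other thread is inside */
}

static void unlock(int self) {
    flag[self] = false;      /* leave the critical section */
}

static void *worker(void *arg) {
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        lock(self);
        counter++;           /* critical section */
        unlock(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %d\n", counter);
    return 0;
}
```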

  30. Mutual exclusion in distributed system
    Mutual exclusion is a concurrency control property introduced to prevent race conditions. If you are interested in mutual exclusion, this website is a great place to learn: it helps you understand mutual exclusion in a single computer system versus a distributed system, the requirements of a mutual exclusion algorithm, and solutions to distributed mutual exclusion.

  31. Banker’s Algorithm in Operating System Example
    The Banker's algorithm is used to avoid deadlock and allocate resources safely to each process in a computer system, much like a banker deciding whether a loan can safely be granted. This website discusses the goals, notation, characteristics, disadvantages, and a summary of the Banker's algorithm.
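
    A minimal C sketch (not from the linked site) of the safety check at the heart of the Banker's algorithm, using small hypothetical allocation, maximum, and available matrices:

```c
/* Banker's safety check: find an order in which every process can finish. */
#include <stdbool.h>
#include <stdio.h>

#define P 3  /* processes */
#define R 3  /* resource types */

int main(void) {
    int alloc[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};  /* hypothetical data */
    int max[P][R]   = {{7, 4, 3}, {3, 2, 2}, {4, 2, 2}};
    int avail[R]    = {3, 3, 2};

    int need[P][R];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    bool finished[P] = {false};
    int safe_seq[P], count = 0;

    /* Repeatedly look for a process whose need can be met by what is available. */
    while (count < P) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];   /* process finishes, releases resources */
                finished[i] = true;
                safe_seq[count++] = i;
                progress = true;
            }
        }
        if (!progress) { printf("Unsafe state: deadlock possible\n"); return 1; }
    }

    printf("Safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
    printf("\n");
    return 0;
}
```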

  32. Understanding Readers-Writers Problem
    The readers-writers problem is a classical process synchronization problem; it concerns a data set, such as a file, that is shared between more than one process at a time. To understand the details of the readers-writers problem, this website helps a lot: it covers the meaning of the problem in general and the reader and writer processes, complete with their code.
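
    A minimal C sketch (not from the linked site) of the classic readers-preference solution, where a mutex protects the reader count and a semaphore keeps writers out while any reader is inside (compile with gcc -pthread):

```c
/* First readers-writers solution: readers have priority, so writers can starve. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t wrt;                 /* held by a writer, or by the first reader */
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static int read_count = 0;        /* number of readers currently reading */
static int shared_data = 0;

static void *reader(void *arg) {
    int id = *(int *)arg;
    pthread_mutex_lock(&count_lock);
    if (++read_count == 1)
        sem_wait(&wrt);           /* first reader blocks writers */
    pthread_mutex_unlock(&count_lock);

    printf("reader %d sees %d\n", id, shared_data);

    pthread_mutex_lock(&count_lock);
    if (--read_count == 0)
        sem_post(&wrt);           /* last reader lets writers in */
    pthread_mutex_unlock(&count_lock);
    return NULL;
}

static void *writer(void *arg) {
    (void)arg;
    sem_wait(&wrt);               /* exclusive access */
    shared_data++;
    printf("writer wrote %d\n", shared_data);
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    pthread_t r1, r2, w;
    int id1 = 1, id2 = 2;
    sem_init(&wrt, 0, 1);
    pthread_create(&r1, NULL, reader, &id1);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, &id2);
    pthread_join(r1, NULL);
    pthread_join(w, NULL);
    pthread_join(r2, NULL);
    sem_destroy(&wrt);
    return 0;
}
```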

  33. Message Passing Model of Process Communication
    The message passing model allows multiple processes to read and write data to a message queue without being directly connected to each other. This website discusses process communication, the message passing model, a diagram that demonstrates it, and its advantages and disadvantages.

  34. OS Critical Section Problem
    The critical section is a code segment where shared variables can be accessed. For those interested in this topic, this website will help you learn deeply about critical sections: it covers the meaning of the critical section problem in general and the primary and secondary requirements of synchronization mechanisms, along with an illustrative diagram.

  35. Strategies for handling Deadlock
    Deadlock is a situation where a process or a set of processes is blocked, waiting for some resource that is held by another waiting process. If you run into deadlock, you don’t need to worry, because this website tells you how to handle it: it covers strategies for handling deadlock such as deadlock ignorance, deadlock prevention, deadlock avoidance, and deadlock detection and recovery.

  36. Scheduling in Real Time Systems
    Real-time systems are systems that carry real-time tasks. These tasks need to be performed immediately, with a certain degree of urgency. This website details real-time scheduling in both general and specific terms, along with a classification of scheduling algorithms.

  37. CFS: Completely fair process scheduling in Linux
    Completely fair scheduling (CFS), which became part of the Linux 2.6.23 kernel in 2007, is the scheduling class for normal (as opposed to real-time) processes and is therefore named SCHED_NORMAL. If you want to dig deeper into CFS, this website will help you understand it: it covers the detailed meaning of CFS, some core concepts, classic preemptive scheduling versus CFS, special features, and the CFS implementation.

  38. Difference between Preemptive and Cooperative Multitasking
    Multitasking is the methodology of executing multiple tasks or processes concurrently over a period of time. Preemptive and cooperative multitasking are its two types. Many people are still confused about the difference between them, but don’t worry: this website covers the definitions of preemptive multitasking and cooperative multitasking and the differences between them in many aspects.

  39. Explore load balancing
    Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers. This website discusses what load balancers are and how they work, hardware- versus software-based load balancers, common load balancing algorithms, and why load balancing is necessary.

  40. An Overview of Non-Uniform Memory Access
    Non-uniform memory access (NUMA) is the phenomenon that memory at various points in the address space of a processor has different performance characteristics. If you are interested in NUMA, this website is a great place to learn: it helps you understand NUMA specifically, how operating systems handle NUMA memory, and how Linux handles NUMA.

  41. Multiple-Processor Scheduling in Operating System
    Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the scheduling function for a system that consists of more than one processor. With multiple processors in the system, load sharing becomes feasible, but it makes scheduling more complex. This website discusses the definition, key notes, and techniques of multiprocessor scheduling.

  42. States of a Process
    A process is a program in execution, which forms the basis of all computation. A process is an ‘active’ entity, as opposed to a program, which is considered a ‘passive’ entity. Attributes held by a process include its hardware state, memory, CPU, etc. To understand the details of process states, this website helps a lot: it covers the different process states, CPU- and I/O-bound processes, types of schedulers, multiprogramming, and the degree of multiprogramming.

  43. Understanding Thread Scheduling
    Many computers have only one CPU, so threads must share the CPU with other threads. The execution of multiple threads on a single CPU, in some order, is called scheduling. If you are still confused about this topic, this website will guide you to a better understanding: it discusses the two levels of boundary scheduling involved with threads, Lightweight Processes (LWP), contention scope, and allocation domain, including what each of the two controls.

  44. CPU Scheduling in Operating Systems
    CPU scheduling is a process that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. For those interested in this topic, this website will help you learn deeply about CPU scheduling: it covers the different times associated with a process, why we need scheduling, the objectives of a process scheduling algorithm, the different scheduling algorithms, and much more.

  45. What is Big O Notation Explained: Space and Time Complexity
    Big O notation is a convenient way to describe how fast a function grows. This website digs into what Big O notation is and why it matters, the formal definition of Big O, Big O versus little o, Omega, and Theta, a complexity comparison between typical Big Os, time and space complexity, and best, average, worst, and expected complexity.
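
    A small C sketch (not from the linked site) contrasting two typical Big Os: linear search examines up to n elements, O(n), while binary search halves the range each step, O(log n):

```c
/* Two search routines over the same sorted array, with different growth rates. */
#include <stdio.h>

/* O(n): worst case examines all n elements. */
static int linear_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

/* O(log n): each iteration discards half of the remaining range (array must be sorted). */
static int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11, 13, 15};
    int n = (int)(sizeof a / sizeof a[0]);
    printf("linear: index %d\n", linear_search(a, n, 11));
    printf("binary: index %d\n", binary_search(a, n, 11));
    return 0;
}
```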

  46. SSDs vs Hard Drives as Fast As Possible (Video)
    We usually think of secondary storage as differing only in speed, but HDDs and SSDs are actually apples and oranges, because their structures are completely different. An HDD is disk-based, with actual spinning platters used to store data, while an SSD consists of flash memory chips, commonly known as NAND.

  47. Everything you need to know about NAND Flash
    NAND flash is a kind of chip containing cells that can store data. NAND flash also comes in different grades: some are for consumers and some are for industry, where the density and performance are usually higher.

  48. How do hard drives work? - Kanawat Senanan (Video)
    How do the hard drives in our computers work? How did they develop so fast that a small disk that once could store only a few megabytes can now, at the same size, store terabytes of data? It turns out the data is stored using electricity and magnetism: basically, we have to find a way to store binary values in bits that we can later retrieve as information, and besides storing them, how do we read them back?

  49. RAID Reliability Calculator | Simple MTTDL Model | ServeTheHome
    Hard drives can also be damaged or wear out over time; there are applications that can estimate approximately how long a drive will last before it fails.

  50. RAID 0, RAID 1, RAID 10 - All You Need to Know as Fast As Possible (Video)
    We often hear about RAID but sometimes don’t know what it means. RAID stands for Redundant Array of Inexpensive Disks, meaning multiple disks are used to increase performance: we basically combine the use of two or more drives. In RAID 0, if one of the disks fails, the whole array fails, because there is basically no redundancy to recover from.

  51. What are Drive Partitions? (Video)
    Partitioning basically divides a drive into several volumes or logical drives, each with its own file system, so we can create separate containers for our storage. There are also partitioning schemes, the MBR and GPT partition tables. With MBR we cannot make partitions larger than 2 TB, because we simply cannot address more than that. The schemes also differ between BIOS and UEFI; for more details, see the video.

  52. Why you Shouldn’t Low Level Format Your Hard Drive | Nostalgia Nerd (Video)
    Formatting usually comes in two types, and what low-level and high-level format mean now is different from what they used to mean. A quick format basically just discards the index table, while a full format generally overwrites the data with blanks to completely empty the drive, which is effectively rewriting it. There are also the terms high level and low level: a low-level format actually redoes the formatting at the mechanical level, laying out the hardware sectors on the disk drive, which is no longer possible nowadays. For more details, see the video.

  53. How Risky is Updating Your BIOS? ( + Corruption Demonstration) (Video)
    The BIOS is essentially very low-level software, the first program that runs on a computer; it is firmware that knows how to load an operating system. Nowadays it has been replaced with what is called UEFI. The BIOS also doesn’t really need to be updated very often, because doing so is of course risky.

  54. systemd Tips and Tricks (10-Minute Article)
    systemd is the earliest process controller in Linux; like the one we discussed last week, it is a very important ancestor of processes in Linux. Here we can take a closer look at some commands for testing or seeing what the system is doing.

  55. Disk Scheduling Algorithms
    Besides scheduling processes, the operating system must also manage the disk. Why is this important? Because the disk also receives many requests, and disks have their own speeds and access methods; since a drive usually consists of several platters, moving the head too far can be inefficient. There are the FCFS and SSTF algorithms, sweeping algorithms that move back and forth such as SCAN, and also C-SCAN, LOOK, and C-LOOK.
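
    A small C sketch (not from the linked site) comparing FCFS and SSTF by computing total head movement over a hypothetical request queue starting at cylinder 53:

```c
/* Total head movement for FCFS versus SSTF disk scheduling on an example queue. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define N 8

static int fcfs(const int *req, int n, int head) {
    int moved = 0;
    for (int i = 0; i < n; i++) {
        moved += abs(req[i] - head);  /* serve requests in arrival order */
        head = req[i];
    }
    return moved;
}

static int sstf(const int *req, int n, int head) {
    bool done[N] = {false};
    int moved = 0;
    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {           /* pick the closest pending request */
            if (done[i]) continue;
            int d = abs(req[i] - head);
            if (best < 0 || d < best_dist) { best = i; best_dist = d; }
        }
        moved += best_dist;
        head = req[best];
        done[best] = true;
    }
    return moved;
}

int main(void) {
    int requests[N] = {98, 183, 37, 122, 14, 124, 65, 67};  /* example cylinders */
    int start = 53;                                         /* initial head position */
    printf("FCFS total head movement: %d\n", fcfs(requests, N, start));
    printf("SSTF total head movement: %d\n", sstf(requests, N, start));
    return 0;
}
```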

  56. Difference Between Serial and Parallel Transmission
    For transferring data between computers and laptops, two methods are used, namely serial transmission and parallel transmission. There are some similarities and dissimilarities between them. One of the primary differences is that in serial transmission data is sent bit by bit, whereas in parallel transmission a byte (8 bits) or character is sent at a time.

  57. Abstracting device-driver development
    This article shows how you can apply an abstraction layer to the problem of device drivers for SBCs, with a common set of routines that interface the BSP and device driver. The routines enable you to write a device driver without knowing the specific BSP, underlying hardware, or processor type.

  58. Direct Memory Access (DMA)
    Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or receive data directly to or from the main memory, bypassing the CPU to speed up memory operations.

  59. Recovering from Linux Hard Drive Failures
    The tutorial begins with an explanation of the physical operation of hard drives and the various issues that can lead to failure. Head crashes occur when the read-write head scrapes the drive platters. Drive spin-up causes a little bit of damage to the head, which can lead to failure after many thousands of spin-ups. An excessively violent impact of the read-write head on a platter can scrape away the iron oxide coating, sending tiny pieces flying around the drive enclosure and leading to further damage.

  60. Fast I/O for Competitive Programming
    In competitive programming, it is important to read input as fast as possible so we save valuable time. You must have seen various problem statements saying: “Warning: Large I/O data, be careful with certain languages (though most should be OK if the algorithm is well designed)”. The key for such problems is to use Faster I/O techniques.
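
    A minimal C sketch (not from the linked site) of one such technique: reading digits with getchar_unlocked (a POSIX call; plain getchar works too, just slower) and assembling integers by hand, which is typically faster than scanf for large inputs. The helper name fast_read_uint is just for illustration:

```c
/* Fast input: read non-negative integers character by character from stdin. */
#include <stdio.h>

/* Reads one non-negative integer; returns -1 on end of input. */
static long fast_read_uint(void) {
    int c = getchar_unlocked();
    while (c != EOF && (c < '0' || c > '9'))
        c = getchar_unlocked();           /* skip anything that is not a digit */
    if (c == EOF)
        return -1;
    long value = 0;
    while (c >= '0' && c <= '9') {
        value = value * 10 + (c - '0');   /* append the next digit */
        c = getchar_unlocked();
    }
    return value;
}

int main(void) {
    long sum = 0, x;
    while ((x = fast_read_uint()) != -1)  /* sum all integers on stdin */
        sum += x;
    printf("%ld\n", sum);
    return 0;
}
```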