Thursday, January 7, 2016

Computer Networks:

Communication using Computers

We're going to start the computer networks section by taking a look at the different communication tools that most of us use every day and that rely on a computer network. The first tool we're going to comment on is email, a technology that is more than forty years old but that never gets outdated; it's still the most used internet application, and if you have a connected computer it is almost certain that you will have one or more email accounts. The power of email lies in its simplicity: send a message to an address and it will reach the receiver's inbox, who will be able to read it asynchronously when he or she checks the mailbox. To send and receive messages you can use an email client or a webmail interface, and that way you will be able to access your email from any internet-connected computer with a browser. Another popular communication app that we're going to see is chat, which is a system to exchange typed electronic messages instantly via the Internet or a cellular network. For that we use a shared software application on a personal computer or on a mobile device. The main difference between chat and mail is that chat refers to communication in real time, where sender and receiver are connected at the same time. Instant messaging applications such as the old Microsoft Messenger or the more recent WhatsApp are a mixture of email and chat, as they can be used both synchronously and asynchronously. Another application is Voice over IP, VoIP, which is a group of technologies that deliver voice communications and multimedia sessions over IP networks such as the Internet. Other terms used to refer to VoIP are IP telephony, internet telephony, broadband telephony or broadband phone service.
It's commonly used not only on computers but also on other devices; for example, you can see here internet phones. Chances are that you are using VoIP sometimes when you use the telephone system for a classic call, because it brings very good savings in bandwidth and efficiency in the use of networks. Videoconferencing uses a set of telecommunication technologies to allow two or more locations to communicate by simultaneously using two-way video and audio transmissions. It has also been called visual collaboration and it's a type of groupware. With the introduction of relatively low-cost and high-capacity broadband telecommunication services in the late nineties, coupled with powerful computing processors and powerful video compression techniques, videoconferencing has made significant inroads in business, education, medicine and media, among other uses. And finally we're going to speak about cellphones, because the introduction of internet-connected cell phones has changed the rules of the entire telecommunications game. The use of SMS and voice calls is being gradually replaced by messages and calls over the Internet, with tools like WhatsApp or VoIP apps like Viber that connect everybody to the net at a marginal cost.
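To make the simplicity of email concrete, here is a minimal Python sketch that builds a message with the standard library. The addresses, server name and credentials are placeholders, not real values, and the actual network send is left commented out so the sketch stays self-contained:

```python
# Building an email message with Python's standard library.
# smtplib is only needed for the (commented-out) send step at the end.
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

# Hypothetical addresses, for illustration only:
msg = build_message("alice@example.com", "bob@example.com",
                    "Hello", "Email is asynchronous: read me whenever you like.")

# Sending would contact a real SMTP server, so it is shown but not executed:
# with smtplib.SMTP("smtp.example.com", 587) as server:
#     server.starttls()
#     server.login("alice", "password")
#     server.send_message(msg)
```

The point of the sketch is the shape of the protocol: a message is just addressed text handed over to a mail server, which delivers it to the receiver's inbox for later, asynchronous reading.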

Computer Networks
What's a computer network? We're going to revisit the definition we used in the first course of the series, the Information Systems and Computer Applications series, and say that a computer network or data network is a telecommunication network which allows computers and other electronic devices to exchange data. To do so they are connected through network links that can be wired or wireless. In short, a computer network is a set of computers that can communicate with each other, and all the communication services that we use nowadays are built over computer networks. Computer networks, like any other network, are composed of nodes. In computer networks a node is an electronic device that is capable of sending, receiving or forwarding data using a communication medium. This way, a network is defined as a group of two or more nodes designed to send and receive information, sharing hardware and software resources. In data communications, a physical network node may be either a piece of data communication equipment (DCE), such as a modem, hub, bridge, router or switch, or a piece of data terminal equipment (DTE), such as a digital telephone handset, a printer, a computer or a server. There are lots of different ways of classifying computer networks; the most general one is, perhaps, to differentiate between wired and wireless networks. The main differences, other than the obvious transmission media, are that wireless connections are usually slower, and this is the main reason why most big networks still depend on wired Ethernet for the bulk of their connections; wired networks can even use fiber optic cable instead of copper wire cabling to achieve even faster speeds. On the other hand, wireless networks can be more flexible and comfortable for users and require less infrastructure.
For example, a wireless base station can provide connections for up to two hundred and fifty simultaneous users, so a large wireless network with fifty or more personal computers, for example, requires far less hardware than a wired network for the same number of computers. Now we are going to speak about the biggest of the computer networks: the Internet. The Internet is the network of networks; it is a global system of interconnected computer networks that use a standard set of rules to communicate. This standard set of rules is called the internet protocol suite, TCP/IP, and it's based on the switching of independent packets of data. Here, in the slide, we can see a visualization of routing paths through a portion, a small portion, of the Internet, and you can appreciate its complexity nowadays. If we speak about the number of connected devices, according to Gartner there will be almost five billion connected things, which they define as network- or internet-enabled devices, by the end of two thousand fifteen, and twenty-five billion by the end of two thousand twenty; that is three for every person on the planet. There is a consensus that the rising number of connected devices of all sorts and the explosion in the use of wireless devices in businesses and homes are creating a step change in the use of technology, but there is not much agreement on its dimensions and duration. For example, Cisco predicted in two thousand thirteen that twice as many devices, about fifty billion, would be connected by the end of two thousand twenty, and Juniper Research estimated in July two thousand fifteen that this number would be around thirty-eight point five billion. As you can see these are very different numbers; all of them predict a huge increment, but they don't agree on how big it is going to be.
But the reality in the Western world is that, from your laptop to your clock or from your tablet to your fridge, more and more stuff is being interconnected. Nowadays we have nearly reached the point where it is strange not to be online all day long in this part of the world, in the Western world. The term Internet of Things, as you can see here on the slide, was coined by a British entrepreneur called Kevin Ashton in nineteen ninety-nine. Typically IoT, or Internet of Things, is expected to offer advanced connectivity of devices, systems and services that goes beyond machine-to-machine communication, whose acronym is M2M. And it covers a variety of protocols, domains and applications where things are connected to the internet. The interconnection of these embedded devices, including smart objects, is expected to usher in automation in nearly all fields.
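As a tiny illustration of two nodes sending and receiving data over TCP/IP, here is a self-contained Python sketch that runs both endpoints on the same machine; the message text is arbitrary, and the operating system picks a free port:

```python
# Two "nodes" exchanging data over TCP on the local machine:
# a listener (receiver) and a sender, as on any packet-switched network.
import socket
import threading

def run_receiver(server_sock, results):
    conn, _ = server_sock.accept()        # wait for the other node to connect
    with conn:
        results.append(conn.recv(1024))   # receive up to 1024 bytes

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))             # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=run_receiver, args=(server, received))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello from node A")  # the sending node

t.join()
server.close()
print(received[0])  # b'hello from node A'
```

The same two calls, `sendall` and `recv`, work unchanged whether the two nodes are on one machine, on one LAN, or on opposite sides of the Internet; TCP/IP hides the packet switching underneath.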

Network Classified by Area Covered
Let's look at computer network classification based on distance. So we start with a LAN, a local area network. A LAN is a computer network that interconnects computers within a limited area, such as, for example, a residence, a school, a laboratory or an office building. All the components of the LAN are locally managed. The two most common transmission technologies for LANs are Ethernet over unshielded twisted pair or fiber for desktop access, and Wi-Fi for wireless access. Simpler LANs generally consist of one or more LAN switches that are connected to computer hosts, and then the LAN can be connected to a router, a cable modem or an ADSL modem to have internet access. LANs can maintain connections with other LANs via leased lines or leased services, or using the internet with some virtual private network technology, for example. A LAN, a local area network, stands opposed to a wide area network, a WAN. A WAN is a telecommunications or computer network that extends over a large geographical distance. Wide area networks are often established with leased telecommunications circuits. If a WAN is confined to, for example, a city or part of it, it's also sometimes called a MAN, a metropolitan area network. Business, education and government entities use wide area networks to connect their different branches spread across different geographical locations. Some network providers offer services that consist of a connection to a WAN but are bundled with additional services, for example for a specific business application that you could need, and this is called a VAN, a value added network.
We have seen the concept of a virtual private network, and this is actually a mechanism which you can use to extend a private network across a public, insecure network, such as the Internet. A virtual private network, a VPN, enables users to send and receive data across a shared or public network as if their computing devices were directly connected to the private network, and this way to have all the functionality, security and management policies that the private network offers. A VPN is created by establishing a virtual point-to-point connection between a computer or remote network and the private network, and this is done by something called virtual tunneling protocols; these protocols encrypt the content of the communication, so from a user perspective the extended network resources are accessed in the same way as they would be if the device were inside the private network. This allows employees to securely access the corporate intranet while traveling outside of the office, and similarly, VPNs securely connect geographically separate branches of an organization to the central network using cheap communication resources such as the Internet. These virtual private network technologies are also used by individual internet users to secure their wireless transactions, to circumvent geographical restrictions and censorship, or to connect to a proxy server for the purpose of protecting personal identity and location. You can find a lot of cheap VPN services offered on the internet for this purpose, and in modern operating systems such as Windows or Mac it's very easy to create a remote-access virtual private network connection using the network control panel. So that's the basic computer network classification based on the distance that the network spans.
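As a small aside on private versus public networks, Python's standard `ipaddress` module can tell whether an address belongs to one of the reserved private ranges used inside LANs and VPNs; the sample addresses below are just illustrative:

```python
# Distinguishing private (LAN/VPN) addresses from public Internet addresses
# using Python's standard ipaddress module. The RFC 1918 private ranges are
# 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16.
import ipaddress

def is_private(addr):
    return ipaddress.ip_address(addr).is_private

for addr in ["192.168.1.10", "10.0.0.5", "172.16.0.1", "8.8.8.8"]:
    print(addr, "->", "private" if is_private(addr) else "public")
```

Addresses in the private ranges are never routed on the public Internet, which is exactly why a VPN tunnel is needed to reach them from outside the private network.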

Distributed System
In this unit we are going to speak about how computers use the network to work together. There are several models, in some cases called architectures, that computers can follow when they collaborate using a network, and in this video we are going to see the most significant ones. We will also speak a little bit about network operating system concepts and see how they have now been incorporated into modern operating systems. So let's start with the first model of collaboration, which is the central computer with terminals. This model was developed in the early nineteen seventies, when many mainframes developed interactive user interfaces and operated as time-shared computers. They would support hundreds of users simultaneously, using batch processing to do so. At first the users logged in with a dumb local terminal connected through serial ports, which soon became remote terminals that used modems to connect to the computers. In this model all the computing power is in the central computer and the remote systems act only as input-output devices; it is used with mainframes and also supercomputers. To access a remote terminal with a personal computer we would use terminal emulation software. A typical example of this model is the point-of-sale devices that you can find in department stores, which are connected to a central computer. The next model is peer-to-peer networking. In peer-to-peer networking users are able to share resources and files located on their computers, and they can access shared resources from others, all without the need for central coordination by a server or some stable host. In peer-to-peer networks, as the name suggests, all connected computers are equals, they are peers, and they all share the same ability to use resources available in the network. Peers can be both suppliers and consumers of resources, and all together they are said to form a peer-to-peer network of nodes.
Peer-to-peer networking was made popular by file sharing systems like Napster. Algorithms incorporated into the peer-to-peer communication protocol make sure that load is balanced, so even peers with modest resources can help to share the load. Files in a peer-to-peer architecture are split into small pieces, and these pieces are available from every node that is downloading the file. So if a node becomes unavailable, its shared resources remain available as long as other peers that offer them remain online. Ideally a peer does not need to achieve high availability, because other redundant peers can make up for any resource downtime, and as the availability and load capacity of peers change, the protocol just re-routes the requests. Emerging collaborative peer-to-peer systems are now going beyond the era of peers doing similar things while sharing resources, and are looking into having diverse peers that bring unique resources and capabilities to offer to the community, thereby empowering it to engage in creative tasks beyond those that can be accomplished by individual peers, but that can be beneficial to all the peers in the network. The next model we'll see is the client-server architecture, where a server provides a function or service to one or many clients, and these clients initiate requests for services at the server. Client-server architectures allow network administrators to centralize functions and applications in one or more dedicated servers. The server is the centre of the system, giving access to resources and instituting security. The server's job is to respond to service requests from clients following the policies established by the company, while the client's job is to use the service provided in response in order to perform some task. The client and the server are completely independent instances.
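The piece-splitting idea described above can be sketched in a few lines of Python. This is a simplified illustration, not a real peer-to-peer protocol; the piece size and the sample data are arbitrary:

```python
# Sketch of how peer-to-peer file sharing splits a file into small pieces,
# each identified by a hash, so a downloading peer can fetch pieces from
# different nodes and verify each one independently.
import hashlib

def split_into_pieces(data: bytes, piece_size: int):
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    digests = [hashlib.sha256(p).hexdigest() for p in pieces]
    return pieces, digests

data = b"x" * 1000 + b"y" * 500            # a 1500-byte "file"
pieces, digests = split_into_pieces(data, 512)

print(len(pieces))                          # 3 pieces of up to 512 bytes each
# A downloading peer verifies every piece it receives against its digest:
assert hashlib.sha256(pieces[0]).hexdigest() == digests[0]
# ...and reassembles the file once all pieces have arrived:
assert b"".join(pieces) == data
```

Because every piece is independently verifiable, it does not matter which peer a given piece came from, which is what lets the protocol re-route requests when peers go offline.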
Servers can be classified by the services they provide: for example, web servers serve web pages, file servers serve computer files, and database servers serve database queries and requests. A server can offer services by sharing its resources: hardware or software, programs or data, processes or storage devices. A single computer can actually offer several services at the same time, so one single computer can be a web server, a database server and maybe also a file server, and it can run all of these at the same time to serve different data to clients making different kinds of requests. Whether a computer acts as a client, a server or both is determined by the nature of the application that requires the service. For example, client software can communicate with server software within the same computer, and then the same computer is the server as well as the client; but this can also be done with separate computers, and communication between servers, such as to synchronize data, is called inter-server or server-to-server communication. The advantages of the client-server architecture include, for example, that centralized servers are more stable, that security policies are easily implemented in the server, and that new technology and hardware can be easily integrated into the system. Another advantage is that hardware and operating systems can be specialized with a focus on performance, and it's easier to set up and update back-ups. Servers can also be accessed remotely from different locations and different types of systems. There are obviously also some disadvantages, and those include that the cost of buying and running a server can be quite high. Another disadvantage is the dependence on a central location for operation, which can create bottlenecks and a single point of failure.
Another drawback is that it requires regular maintenance and updates, and also that servers can be more difficult to configure. In client-server architectures the computing power, memory and storage requirements of the server must be scaled appropriately to the expected workload, so load balancing and failover systems are often employed to avoid bottlenecks and minimize the effects of server failure. Network operating system is a term that can be used to refer to two rather different concepts. On one hand it can refer to a specialized operating system for network devices, for example routers, switches or firewalls, but it can also be used to refer to an operating system that is oriented to computer networking, conceived as an extension to early personal computer operating systems that were designed for a single user using a single computer. Such a network operating system allows for shared file and printer access among multiple computers in the network, and it enables the sharing of data, users, groups, security, applications and other networking functions. Network operating systems for personal computers disappeared a long time ago, as the functions they offered are now integrated into every mainstream operating system, so we can actually say that modern personal operating systems are network operating systems. Looking at some names, there were peer-to-peer network operating systems such as Windows for Workgroups, AppleShare or LANtastic, and client-server network operating systems such as Novell NetWare or Banyan VINES.
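The client-server model described above can be sketched with Python's standard library alone: a tiny web server answers a request from a client running in the same process. The reply text and the handler are, of course, just illustrative:

```python
# Minimal client-server sketch: a web server (the server role) answering a
# request from a client, using only Python's standard library.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):          # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), HelloHandler)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:              # the client's request
    reply = resp.read()

server.shutdown()
print(reply)  # b'hello from the server'
```

Note the division of labour the text describes: the server only responds to requests, the client only initiates them, and the two sides are independent programs that could just as well run on different machines.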

Enterprise Network
Let's look at enterprise network architectures. An enterprise network is a concept that refers to all the systems and procedures that a company has to interconnect its different departments and workgroups, and this way facilitate insight and data accessibility. The key purpose of an enterprise network is, first of all, to integrate the company's different workgroups and eliminate isolated users, so that all departments in a company are able to communicate effectively and have access to the information relevant to them. The second purpose deals with the performance, reliability and security of all the physical systems and components that are used to achieve this connectivity. Enterprise network architectures were developed to establish communication protocols and strategies to achieve these two purposes. An enterprise network includes local as well as wide area networks, and it integrates all the systems in a company: Windows and Apple personal computers, Unix systems, mainframes, and any other access devices like smartphones or tablets. The concept of BYOD, bring your own device, now integrates into the enterprise network and refers to employees using their own smartphones as part of the network. This is gaining momentum across companies of all sizes nowadays and poses quite a challenge to network administrators, as they now have less control over the devices that are connected to the company network, and obviously this can have consequences for the performance and security of the network. Today most enterprise networks use TCP/IP as their base technology; as you might remember, TCP/IP is the same protocol suite that runs the Internet, and it enables organizations to connect workgroups and local area networks in different locations, taking advantage of the internet infrastructure.
Corporate applications based on proprietary technologies are being substituted by applications based on web protocols, and these protocols are now used to build intranets with integrated user interfaces, where the web browser is used as a universal client that connects to all the web services and provides application-specific functionality. Like any other large, complex system, an enterprise network should be designed as a structured system based on two complementary principles: hierarchy on one hand and modularity on the other. The idea is to divide complex problems up into a set of simpler ones and solve each one independently, following a hierarchical structure in the design; this way we can repeat the steps several times until we have a series of problems that are easier to manage. Designing a modular enterprise network has a series of advantages: first of all it eases design and deployment, second it simplifies management and operations, and it enhances reliability. The modules can be conceived as the building blocks of the system; they can be designed with some independence from the overall design, and obviously this simplifies the process. Then, when the network is up and running, modules can be operated and managed as semi-independent elements, and this simplifies management and operation and enhances reliability, as failures that occur in a module can be isolated from the remainder of the network, which provides for simpler problem detection and higher overall system reliability. Network changes, upgrades or the introduction of new services can be made in a staged fashion, and modules can be upgraded or substituted independently, so this also makes it easier to upgrade. Return on investment is also improved, as modules can be re-used and adapted in several parts of the network.

UPValenciaX: ISC101.2x Information Systems and Computer Applications, Part 2: Hardware

Inside the Box: MotherBoard CPU and Memory

Let's look at the computer's motherboard. On modern personal computer motherboards we have, first of all, power connectors to get electrical power from the power supply. We have sockets to install the CPU; in some cases the CPU is directly soldered onto the motherboard. We can see some slots to install the system's main memory; a chipset, which interfaces the CPU with the main memory and the buses; non-volatile memory chips containing the system's firmware that is needed to load the operating system from the hard disk (this is actually known as the BIOS, which stands for basic input output system); some CMOS memory chips and their batteries; a clock generator, which produces the system clock signal to synchronize the various components on the motherboard; slots for expansion cards that give access to the buses managed by the chipset; an integrated controller for permanent storage devices, which is typically a SATA bus driver, with the connectors we need; and an integrated controller for the keyboard and the mouse (in legacy computers we would actually find serial and parallel ports for these devices, but all of them have been substituted by the USB bus nowadays); plus one or several integrated USB bus controllers to connect external devices (we can see two here, and as we've seen, the current USB standard nowadays is 3.1). We also see on the motherboard heat sinks and mounting points for fans, to take care of the dissipation of excess heat. In modern motherboards a lot of the functions that were initially provided by expansion cards, like audio and video, are now integrated: we can find a graphics controller, a sound card and a gigabit network controller already on the motherboard. Talking about expansion cards: expansion cards, which you put into expansion slots, are the most visible way to extend the functionality of the motherboard.
Expansion slots are connectors, you can see one here on the slide, where you can plug in an expansion card, and the most used cards are sound cards, network cards and graphics cards, for example. But, as we indicated, more and more functions have been incorporated into the motherboard, so the importance of expansion slots has decreased: you already have all the functionality you need on the motherboard, so you don't need to expand it. The first expansion slot technology in PCs was called ISA; it then turned into PCI, and later it evolved into PCI Express, also called PCIe. On most motherboards you usually find PCIe slots and some PCI ones for legacy expansion cards, the old technology. You will also find a higher-bandwidth PCIe slot to connect an external graphics card, in case you need more graphics performance than the one you get with the graphics controller integrated into the motherboard. For main memory, personal computers use a type of RAM called dynamic RAM, or DRAM, and this is packed into memory modules called DIMMs, which stands for 'dual inline memory modules'; the motherboard usually has several slots to add more of these memory modules. The memory capacity of DIMM modules currently installed in personal computers goes from one to sixteen gigabytes. These DIMM modules use a technology called double data rate, abbreviated DDR, which is able to exchange data twice per DRAM clock cycle, as we've seen in previous videos, and the most recent version of DDR is DDR4. A lot of motherboards have dual-channel-enabled memory controllers; these use separate channels to communicate between memory and the CPU, and this theoretically multiplies the data exchange rate by two, which means memory access can be faster. In these motherboards the memory layout typically has color-coded DIMM sockets so you know which one is which.
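Since DDR means two data transfers per clock cycle, the peak transfer rate of a module is easy to estimate. The sketch below uses the published figures of a DDR4-3200 module (bus clock 1600 MHz, 64-bit bus) as a worked example:

```python
# Peak transfer rate of a DDR memory module:
# clock frequency x 2 transfers per cycle (the "double data rate")
# x bus width in bytes. A dual-channel configuration doubles it again.
def peak_bandwidth_mb_s(clock_mhz, bus_width_bits=64, channels=1):
    transfers_per_s = clock_mhz * 2              # DDR: two transfers per cycle
    return transfers_per_s * (bus_width_bits // 8) * channels

# DDR4-3200 runs its bus clock at 1600 MHz:
single = peak_bandwidth_mb_s(1600)               # 25600 MB/s (hence "PC4-25600")
dual = peak_bandwidth_mb_s(1600, channels=2)     # 51200 MB/s with dual channel
print(single, dual)
```

This is why a DDR4-3200 module is also labelled PC4-25600: the second number is exactly this peak rate in megabytes per second, and dual-channel operation doubles the theoretical figure.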
The chipset, which is also on the motherboard, is one or several integrated circuits that manage the interface between the CPU, the main memory and the external devices. It used to be made out of two chips, known as the northbridge and the southbridge. What the northbridge does is link the CPU to very high-speed devices, especially DRAM and the graphics controller, while the southbridge connects it to low-speed external buses, such as those for hard disks and USB devices. The northbridge connects directly to the CPU using an interface traditionally known as the front-side bus, FSB, and the southbridge connects to the northbridge. In many modern chipsets the northbridge has been integrated into the processor chip, and the southbridge contains some of the integrated external devices, such as Ethernet, USB and audio devices. So we have seen a little bit about the components that are present on the motherboard of a personal computer.

CPU

So let's look inside the box, inside the computer, at the CPU, the central processing unit. The central processing unit, or CPU, is the component of the computer that executes all the instructions of the software programs stored in the computer's memory. The CPU is actually the brain of the computer. Here on the slide you see a wafer. A wafer is how processors are manufactured: silicon wafers, each of which can include five hundred to a thousand processors at once. In modern CPUs we can find more than two thousand million transistors. This number keeps increasing because the miniaturization process lets us manufacture components that are smaller and smaller every time. This was outlined by Gordon Moore, who proposed in nineteen sixty-five that components were going to get smaller and smaller, and he predicted that the number of components in an integrated circuit would double every eighteen months. Here you can see the theory on this slide. This is known as Moore's law, and nowadays it is still followed, because things keep getting smaller; evidently it won't last forever, that's impossible, and there are some studies suggesting that the end of this law will arrive in about twenty years. The processors we have in our computers nowadays are developed with fourteen-nanometer technology. You can see on the slide that we had twenty-two before, we are now at fourteen, and development and research is going towards ten and seven. This means that with fourteen-nanometer technology, on a die of about eighty-two square millimeters, we can have nearly two billion transistors, and that's a lot. So, just to see what fourteen nanometers is exactly, here you can see on the slide some things that we know.
Here is a guy, his name is Mark, he's one point six six meters tall; here's a fly, which is about seven millimeters; here's a mite, getting smaller; a blood cell; here's a virus, which is a hundred nanometers; and somewhere around here are the fourteen nanometers of the processors we have seen. So this is really, really small: only about fifty times bigger than a single silicon atom. A computer with this CPU is capable of doing very complex tasks, but the CPU itself actually only executes very simple instructions. It is the software, and the way it is written, that does the hard work of converting very complex tasks into the simple instructions that the CPU executes. On the slide we can see a little bit of how the CPU works and what it is able to do: get data from the memory; perform very simple arithmetic or logical operations, like comparisons such as greater than or equal; jump to different parts of a program depending on the result of a comparison that has been done; and put data back into memory. Nothing more, it's basically that. So, how does it work? First there is a prefetch unit that extracts the next instruction from memory, and then there's a decode unit that decodes it to obtain information about the operation. Then it fetches the data that is needed from memory and puts it into the CPU's internal memory, the so-called CPU registers. Then there is the arithmetic and logic unit, the ALU, which is in charge of performing the operation with the data stored in the registers, and it obtains the results. These results are later returned from the internal registers back to the main memory. Once this has been done, the next instruction is loaded from RAM memory. And sometimes, before loading new instructions, we have to jump to another memory location and execute the instructions that are there.
Where you have to jump depends on some comparison that the CPU has done with the information present in the registers. The control unit organizes this complete process, synchronized to a central clock. The performance of the CPU depends on several factors, and one of the most relevant is the speed. The speed of a CPU is measured in hertz, which is cycles per second. Strictly speaking, hertz is not a measure of speed; it actually measures the frequency of the internal clock, and usually more hertz means more processor speed, but this is not always the case, because there are other factors that influence processing speed. The CPUs used in today's computers have clock frequencies in the gigahertz range, where one gigahertz is equivalent to one billion hertz. The problem is that with high frequencies the temperature rises, and with miniaturization heat dissipation becomes harder. So you may have noticed that several years ago there was a steady increment in the clock speed of new processors, but this increment got stuck, and nowadays manufacturers are trying to improve performance using other techniques, as the heat dissipation limit is near what technology can actually achieve today. The clock gives a signal with a given frequency, and each instruction that the CPU executes needs a specific number of clock cycles. Let's look at an example: here on the slide we see the clock, and we assume we have an instruction that needs three point five clock cycles to be executed; you can count them, one, two, three and a half. If the computer works at one hertz, which means one cycle per second, this instruction needs three and a half seconds to be completed; but evidently, if it works at two hertz, two cycles per second, the instruction would only need one point seven five seconds to finish, and it would be quicker.
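The arithmetic of the example above is simply execution time equals cycles needed divided by clock frequency; a quick sketch:

```python
# Execution time of an instruction: cycles needed / clock frequency (in Hz).
def execution_time(cycles, frequency_hz):
    return cycles / frequency_hz

# The example from the text: an instruction taking 3.5 clock cycles.
print(execution_time(3.5, 1))    # 3.5 seconds at 1 Hz (one cycle per second)
print(execution_time(3.5, 2))    # 1.75 seconds at 2 Hz
print(execution_time(3.5, 3e9))  # ~1.17e-9 seconds on a 3 GHz CPU
```

Doubling the frequency halves the time for the same instruction, which is why, all else being equal, more hertz means a faster processor.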
So the speed of modern CPUs is measured in millions of operations per second, and that's why, given the same processor design, the more gigahertz the faster it is. But let's look at other issues that determine the speed of a processor. Another important feature is word length, which is basically the number of bits that the CPU can receive when it accesses the memory. You can imagine it as the number of lanes in a highway: when we increase the number of bits that can be transferred simultaneously, the performance of the CPU evidently improves. Another characteristic that affects CPU performance is the number of cores the processor has. A processor with one core can only execute one instruction at a time; if we add another core and have two, then it can execute two instructions at the same time, and different programs can be executed in parallel. This obviously results in a processor that is up to twice as fast. Here on the slide we see a dual-core processor, which is a processor with two cores. Current CPUs for personal computers have four or even six cores; you can see on the slide, for example, a quad-core processor, a processor with four cores. These cores can be combined with another technology called hyper-threading, which actually allows each core to execute two different threads almost in parallel. So, for example, each tab of a web browser or each avatar of a video game can run in an independent thread. As a result, a single quad-core processor with hyper-threading can execute eight threads in parallel, which obviously increases the performance of the CPU; here you see on the slide a quad-core processor that can have eight threads running at the same time. In the consumer market there are two main processor manufacturers, Intel and AMD, and their products are compatible, so we can execute the same instructions on all of them. For desktop computers Intel has the i3, i5 and i7 models, with some specific designs for laptops and tablets.
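As a rough, hedged illustration of how cores and hyper-threading multiply throughput, we can estimate a peak instruction rate as hardware threads times clock frequency. This is only an upper bound; real performance also depends on word length, cache behaviour, instruction mix and much more:

```python
def peak_instructions_per_second(frequency_hz: float, cores: int,
                                 threads_per_core: int = 1,
                                 cycles_per_instruction: float = 1.0) -> float:
    """Very rough upper bound on throughput: each hardware thread
    retires one instruction every `cycles_per_instruction` cycles."""
    hw_threads = cores * threads_per_core
    return hw_threads * frequency_hz / cycles_per_instruction

# A hypothetical quad-core 3 GHz CPU with hyper-threading (2 threads/core):
print(peak_instructions_per_second(3e9, cores=4, threads_per_core=2))
# versus the same chip with a single core and no hyper-threading:
print(peak_instructions_per_second(3e9, cores=1))
```

The eight-thread configuration gives eight times the theoretical peak of the single-core one, which is the intuition behind the slide's quad-core, eight-thread example.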
The equivalents from AMD are the Athlon processors. Both brands also have very powerful CPUs oriented to the workstation and server segments: the Xeon and the Opteron, respectively. So this is the computer and its components, and we've looked inside the box at the CPU.

Memory
 So let's look again inside the box, this time at the memory, which is also a very fundamental component of the computer. The main memory of the computer is called random access memory, abbreviated to RAM. According to the Von Neumann architecture, the RAM temporarily stores both the instructions of the program being executed and the data those instructions need. The positions of the memory are like postal boxes: each one is identified by a number and contains a sequence of binary digits of fixed length, which is called a word. The access time to any memory location is the same, independent of the address, and this is actually why it's called random access memory: it takes the same time to retrieve information from any random address. This is different from other storage media like hard disks or CDs, because these actually have to rotate to arrive at the desired position, so the access time varies depending on where the information is on the disc. Another important type of computer memory is ROM. This is an abbreviation of read only memory; it is a type of memory recorded by the manufacturer that we can only read, and its advantage is that the content is never erased. ROM plays an important role in the boot process of computers, since the programs it contains are the first ones to be executed: they check the computer components and load the operating system to continue the starting process. There's another type of memory, CMOS memory; CMOS stands for complementary metal oxide semiconductor, which just refers to the material it's made of, and this is a very low power memory.
And remember that RAM memories are volatile: they store content temporarily, and all the information stored is lost when you power off the machine. So to keep some information about basic hardware settings, such as the date and time, a small CMOS memory powered by a battery can be used to remember these things for the next time we start the computer. There are several types of computer memory, and each one obviously has its advantages and disadvantages, like everything in life. There's always a trade-off between price, speed and persistence, and you cannot, for example, have a memory that is cheap, fast and permanent at the same time; but you need these three characteristics to make computers run. This is why memory hierarchies are established in the computer: you put fast, expensive and small storage options close to the CPU, and slower but larger and cheaper options further away. The first level of the memory hierarchy after the CPU registers is the cache memory. If you want the computer to be fast, you need fast access to the program instructions you need to execute and to the data needed to execute them. Fast memories are expensive, so cache memory was created as a mechanism to have faster access to program instructions and the most used data without spending too much on memory. The data you read from memory and put into registers would come from the cache. This is just a small amount of RAM that the CPU can access very fast, and it uses an algorithm to select the most frequently used data and store copies of it. Most of today's processors integrate the cache memory in the CPU chip, having independent caches for instructions and data. Data caches are usually organized in levels called L1, L2 and L3, ordered by speed and size. The next level of the memory hierarchy is the RAM that we've seen, random access memory, and this is known as the main memory or primary storage of the computer.
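The cache idea described above, keeping copies of the most used data in a small fast store, can be sketched with one common replacement policy, least-recently-used (LRU). This is a toy model in software; real CPU caches are implemented in hardware and are far more sophisticated:

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache: keeps the `capacity` most recently used items,
    evicting the least recently used one when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None             # cache miss: would fall through to RAM
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes the most recently used item
cache.put("c", 3)      # cache is full, so "b" (least recently used) is evicted
print(cache.get("b"))  # None: a cache miss
print(cache.get("a"))  # 1: a cache hit
```

The point of the sketch is the trade-off from the lecture: a tiny, fast store plus a sensible eviction rule gives most of the benefit of a much larger memory.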
It's fast but volatile, as I said, so you need another level in the hierarchy that is slower but can be permanent, and there you have, for example, magnetic and solid-state hard drives, also known as secondary storage, where you can keep data and it will not disappear. In most personal computers we have a lot of different applications open at the same time, but we're not really multitasking: we're actually switching between them. So when there is not enough RAM left, what the operating system does is a clever trick called virtual memory. This consists of setting aside a certain portion of storage on the hard disk to act as additional RAM, moving there the memory content of the application that loses focus and bringing it back into RAM when it gets focus again. This way the operating system can work with a virtual memory space that is much bigger than the physical RAM it really has. This is actually one of the reasons why sometimes your computer freezes for a short time when you switch from one application to another: it's reloading memory content from the hard disk into RAM. To manage this whole process, the operating system organizes the memory assigned to different applications in pages that can be moved between RAM and the hard disk. This is called memory paging, and the files where RAM content is stored on the hard disk are called pagefiles or swapfiles. So this has been the memory, where we've looked inside the box of the computer.
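The paging trick just described can be sketched as moving pages between a small "RAM" and a larger "swap" area when RAM fills up. This is a deliberately simplified model of what the operating system does (real pagers pick victims much more cleverly):

```python
class ToyPager:
    """Simplified virtual memory: at most `ram_slots` pages live in RAM;
    the rest are swapped out to 'disk' and paged back in on access."""
    def __init__(self, ram_slots: int):
        self.ram_slots = ram_slots
        self.ram = {}    # page_id -> contents (fast, limited)
        self.swap = {}   # page_id -> contents (slow, large)

    def access(self, page_id, contents=None):
        if page_id in self.swap:             # page fault: bring the page back
            self.ram[page_id] = self.swap.pop(page_id)
        elif page_id not in self.ram:
            self.ram[page_id] = contents     # brand-new page
        if len(self.ram) > self.ram_slots:   # RAM full: swap a page out
            victim = next(iter(self.ram))    # oldest page (naive choice)
            if victim != page_id:
                self.swap[victim] = self.ram.pop(victim)
        return self.ram[page_id]

pager = ToyPager(ram_slots=2)
pager.access("editor", "editor pages")
pager.access("browser", "browser pages")
pager.access("game", "game pages")   # RAM is full: "editor" is swapped out
print(sorted(pager.swap))            # ['editor']
print(pager.access("editor"))        # page fault: 'editor pages' comes back
```

The brief freeze the lecture mentions when switching applications corresponds to the page-fault branch: the content has to travel back from the slow disk into RAM.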


UPValenciaX: ISC101.2x Information Systems and Computer Applications, Part 2: Hardware

Computer Architecture

 Let's examine the functionality and organization of the computer, which is called the computer architecture. A computer is a machine that transforms input data into output information through the execution of the instructions of a software program. Therefore the components we find in any computer are the ones we see here in the picture: we have input devices, the CPU, storage, memory and output devices. The input devices, like the keyboard and the mouse, are used to get data from the user or the environment and introduce it into the system. Then the CPU, the central processing unit, executes the instructions of a program one by one: it fetches the data, does the operations and returns the results to the memory. The main or primary memory, which you see here on the slide, stores all the data along with the instructions of the program that controls the computer. Then we have a set of output devices to send the obtained results back to the user, depending on their nature: this is for example the screen, or the printer where we can print something, or the speaker if it is audio results we're looking at. The permanent storage devices keep programs for future execution, and data and information for future queries; these are considered both input and output devices, since they perform both operations: they have to read data in and write out the data they store. The architecture of modern computers is known as the Von Neumann architecture, and its main feature is that the main memory temporarily stores both data and instructions. There are other architectures that were defined at the beginning of computing, like for example the Harvard architecture, but they were more complex and haven't been as successful. Here on this slide we see how the Von Neumann architecture works: you can see that all the components are connected to the system bus, which is formed by a data bus, an address bus and a control bus.
Data is exchanged between components using the data bus, the address bus indicates which device is being accessed, and finally the control bus transports state signals between the different devices. So, in short, we have seen how the computer architecture is formed, and what the functionality and organization of a computer is.
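The fetch-decode-execute cycle of a Von Neumann machine, where instructions and data live in the same memory, can be sketched with a small invented toy machine. The instruction set (LOAD, ADD, STORE, HALT) and single accumulator register are assumptions made up for the illustration:

```python
def run(memory):
    """Toy Von Neumann machine: instructions and data share `memory`.
    Each instruction is a (opcode, operand) pair; one accumulator register."""
    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        opcode, operand = memory[pc]  # FETCH the instruction from memory
        pc += 1
        if opcode == "LOAD":          # DECODE and EXECUTE
            acc = memory[operand]     # read data from the very same memory
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc     # write the result back to memory
        elif opcode == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6: compute memory[4] + memory[5].
memory = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
          4: 2, 5: 3, 6: 0}
print(run(memory)[6])  # 5
```

Notice that cells 0 through 3 hold instructions while cells 4 through 6 hold data, yet the CPU reads both over the same memory: that unified memory is the defining feature of the Von Neumann architecture the lecture describes.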
Personal Computer Architecture

 We are going to see how the hardware in a personal computer is organized. The PC was designed with an open architecture, which means that it uses standard modular components: we can add, replace, update and swap them easily, and the computer will identify and handle the new devices automatically. The main component of a computer system is the motherboard or main board. This is a printed circuit board (PCB) that holds the main components of the computer and all the electronics needed to communicate between them, and we can also use it to expand the system. We could say that this is the central nervous system of the computer. The motherboard provides the electrical connections by which the other components of the system communicate. It is unlike a backplane, since it also contains the central processing unit, the CPU, and hosts other subsystems and devices. The form factor is the specification of the motherboard or main board, and it specifies the dimensions, the power supply type, the locations of mounting holes, the number of ports on the back panel, etc. In the IBM PC compatible industry, standard form factors ensure that parts are interchangeable across competing PC vendors and generations of technology. This is in contrast to enterprise computing, where form factors ensure that server modules fit into existing rack mounting systems. Traditionally the most significant specification is that of the motherboard, and it generally dictates the overall size of the case. The most used form factor for IBM PC compatible motherboards is ATX, which stands for Advanced Technology eXtended, and its derivatives; among small form-factor main boards, mini-ITX is the de facto standard. Here on the slides we see a couple of them, and you can see more or less what size they have by comparing them to the pen you can also see on the slide.
A power supply unit (PSU) is responsible for converting AC mains power to the regulated low-voltage DC power needed by the internal components of the computer. Modern personal computers universally use switched-mode power supplies. Some power supplies have a manual selector for the input voltage, while others automatically adapt to the supply voltage. Most modern desktop personal computer power supplies conform to the ATX specification, which includes the form factor and voltage tolerances. An ATX power supply unit supplies +3.3V, +5V, +12V and -12V. While an ATX power supply is connected to the mains it always provides a +5V standby (5VSB) voltage, so that standby functions on the computer and certain peripherals remain powered. ATX power supplies are turned on and off by a signal originated from the motherboard. A computer case, also known as the computer chassis, the computer tower, the system unit, the cabinet or the base unit, among the many names used for it, is the enclosure that contains all the components of the computer. It's sometimes also referred to as the CPU, which is incorrect, because it contains the CPU on the motherboard, but it contains a lot more. Cases come in many different sizes, and the form factors we have seen determine the size and shape of a computer case. Form factors for rack-mounted and blade servers may include precise external dimensions as well, since these cases must fit into specific enclosures. A case designed for an ATX motherboard and power supply, like the ones in the previous slide, may take on several forms that you might have seen: for example the vertical tower, which is designed to sit on the floor and whose height is bigger than its width, and the flat desktop form, where the height is less than the width.
We also have the pizza box form, in which the height is less than five centimeters; this is designed to sit on the desk under the computer's monitor, so you can put the monitor on top of it. Tower cases are often categorized as mid-tower, mini-tower or full tower. Full tower cases are typically larger in volume than desktop cases, so they have more room for drive bays and expansion slots. Desktop cases and mini-tower cases are popular in business environments where space is at a premium. For high performance computers, heat dissipation is a major issue that has to be taken into account, and these computers have their own form factors and case designs. A drive bay is the place reserved for storage devices such as hard disks and CD or DVD units, and the number of bays we have depends on the size of the case, on the amount of devices that fit into it. So we've seen a few internal components of a personal computer, and now we are going to see how they interface with external devices. The computer has a number of connectors available to connect the external devices that we've seen in previous videos, and these connectors are called ports. In a standard computer there are lots of ports you can use to connect different external devices: for example screen ports, like VGA, DVI and HDMI, to connect screens and other video devices; sound ports that you can use to connect headphones, speakers and microphones for audio; and general-purpose ports like USB, which you can use to connect lots of things: hard disks, printers, scanners, the mouse, the keyboard, and all the other devices we've seen in previous videos. There are also network ports, like Gigabit Ethernet network interface adapters, and bays that you can use for different types of flash memory cards, for example. In older computers you can also find serial ports, parallel ports for printers, dedicated mouse and keyboard ports, and modem ports, for example.
And there are other ports, like for example FireWire buses. But nowadays we don't find these dedicated ports so much anymore, because USB is the standard technology to connect any kind of device to your personal computer. For USB there are several versions. The current one is 3.1, which supports a very high transmission capacity, also called bandwidth. The old USB 1.0 port is not suitable anymore for high-bandwidth devices such as hard disks; USB 2.0 is still being used, and this or higher version ports should be used to connect them. Thunderbolt is another high-speed general-purpose port that can be found mostly in Apple computers. So in this video we've looked a little at the computer hardware and how it's organized in your personal computer.
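As a hedged illustration of why bandwidth matters for devices like external hard disks, here is the approximate best-case time to transfer a 1 GB file at nominal USB signalling rates (USB 2.0 is nominally 480 Mbit/s and USB 3.1 up to 10 Gbit/s; real throughput is noticeably lower because of protocol overhead):

```python
def transfer_seconds(size_bytes: float, bandwidth_bits_per_s: float) -> float:
    """Best-case transfer time, ignoring protocol overhead:
    bytes are converted to bits and divided by the link bandwidth."""
    return size_bytes * 8 / bandwidth_bits_per_s

one_gb = 1e9  # one gigabyte, as 10^9 bytes
print(round(transfer_seconds(one_gb, 480e6), 1))  # USB 2.0: about 16.7 s
print(round(transfer_seconds(one_gb, 10e9), 1))   # USB 3.1: about 0.8 s
```

The twenty-fold difference is why the lecture recommends USB 2.0 or higher for hard disks and why USB 1.0 (at 12 Mbit/s) is no longer suitable for them.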




The computer and its components

WHAT IS A COMPUTER? HARDWARE AND SOFTWARE. DATA AND INFORMATION. THE IPOS CYCLE


So, let's look at what a computer is. On the slide here, all the devices you see are computers. Some are more specialized than others, but they are all computers. So, what is a computer? A computer is a general purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically. The sequence of operations it carries out is called a computer program, and computer programs can be changed any way we want; in this way the computer can solve many different problems. All the computers we know today are digital electronic computers based on microprocessors, but there was a time when computers were mechanical devices. If we look at what computers do, they basically store and process data to generate useful information. Here we see that data goes into the computer and information comes out. Data is normally used to refer to raw, unprocessed facts, and information is the result we get after processing this data. This information can, for example, be used to draw conclusions or make decisions. The information we obtain in one processing stage can also be fed again as raw data to the next processing cycle to get new information. When you think of a computer you think of an electronic device, but this electronic device would be useless were it not for the software programs that make it work. So the definition of a computer includes the electronic device and its components, called hardware, and the computer programs that control it, called software. Software can be compared to the instruction booklet of ready-to-assemble furniture, for example, as it has step-by-step assembly instructions that guide the whole assembly process and indicate how you can make something. Computer hardware, on the other hand, is all the physical components of a computer system, and these can be located inside the system or outside it. In case they are outside the system we call them peripherals.
So these components are, for example, the keyboard, the mouse and other input devices; the CPU, the memory and the other chips needed to connect all the components; the monitor, the printer and any other output device we can think of. There are hard disks, DVD units and flash memory drives that can be used to store information permanently. There are communication devices used to connect to a local or remote network, like network cards or modems. And we have all the support electronics needed to make the system run, such as the power supply, cable buses, cooling fans, etc. In the beginning of personal computers, installing some new hardware in a computer could be a nightmare, because some hardware configuration was necessary: we maybe needed to change microswitches, for example, and we had to execute many configuration programs to make sure the hardware was installed in the correct way. Today, for most hardware devices, the operating systems of modern personal computers include some form of plug and play capability that automatically manages most of the processes needed to configure a new piece of hardware added to your personal computer. Manual hardware installation might sometimes still be needed, but only on occasions when you're using some specialized system running an operating system derived from Unix, for example, like some Linux distributions. Computer hardware gets obsolete very fast, and some materials used in its manufacture can actually be a big source of waste and pollution, so there are several initiatives nowadays that try to minimize or eliminate the impact on the environment when we design, manufacture, use and dispose of computers and servers. These initiatives are commonly known as green computing, green ICT, green IT or ICT sustainability.
So, as we indicated, a computer is useless without a computer program that instructs it how to process data and manage devices. Computer software allows computers to perform specific tasks or applications. In a computer there are basically two types of software: one is the system software and the other is the application software. The main piece of system software is the operating system; the operating system is loaded when we start up the computer, and its objective is to manage the interaction between the user, all the application software and the hardware devices of the computer. It will, for example, manage the access to files and orchestrate the execution of the different programs. The other type of software, as opposed to system software, is application software. Application software is created to perform specific tasks; we can think, for example, of word processing, creating spreadsheets, designing slide shows, calculating the stresses of a building or browsing the internet. As we described with hardware, installing new software in the early days of computing could also be quite a tricky task, but nowadays most software programs are distributed with an automated tool called an installation wizard that guides the user through an easy installation process. All computers basically work following an information processing cycle, known as IPOS, which stands for Input, Processing, Output, Storage, as you can see here on the slide. First the computer needs data to be entered into the system: this is the input. For this the computer has specific hardware known as input devices that you use to enter data into the system, for example a keyboard or a scanner. Once the input is there, the computer performs some operations on the data, which is the processing, and this is done by the central processing unit or CPU.
After all this processing is finished, the computer presents the results to the user, and this is the output. For this it also uses specific hardware, the output devices: think for example of a monitor or a printer. The result of all this processing can also be stored for future use, and this is the storage part of the IPOS cycle; storage just means saving the data, the programs or any other output for future use. For this the computer has permanent storage devices, and you can think of the internal hard disk or removable disks such as pen drives. The storage devices have both input and output functions. So in this video we have seen what a computer is and what its most important components are.
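The IPOS cycle can be sketched with a trivial program: data comes in, gets processed into information, the result is output, and it is stored for the next cycle. This is an illustrative toy, with an in-memory dictionary standing in for a permanent storage device:

```python
storage = {}  # stands in for a permanent storage device

def ipos_cycle(raw_data):
    # INPUT: data enters the system (here, passed in as an argument)
    data = raw_data
    # PROCESSING: the CPU transforms raw data into useful information
    information = sum(data) / len(data)
    # OUTPUT: the result is presented to the user
    print(f"average: {information}")
    # STORAGE: the result is saved for future use
    storage["last_result"] = information
    return information

ipos_cycle([2, 4, 6])          # prints "average: 4.0"
print(storage["last_result"])  # 4.0, available for the next cycle
```

Note how the stored result could be fed back in as raw data for a later cycle, which is exactly the feedback loop between information and data described earlier.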

COMPUTER HISTORY
 In this video we will review the history of the first computers. Before the machines that we can consider real computers, there were some designs that can be considered their ancestors. Probably we should go back to the abacus as the first calculation helper, but if we focus on automatic artifacts, the Pascal calculator, the Pascaline, is one of the first antecedents. It was a machine that, using gears, could solve basic mathematical operations. Another relevant machine was Babbage's difference engine, which was able to tabulate polynomial functions; for example, it could fill in the logarithmic or trigonometric tables that maybe you've used. But the first machine that we can consider a computer was the Z1, designed by Konrad Zuse in the years around the Second World War. The only drawback was that it was a mechanical computer and it never worked reliably. On the other hand, Alan Turing developed a set of devices focused on breaking the codes generated by the German army using the Enigma machine. This machine was called the bombe and is considered one of the first electromechanical computing devices. The Mark I, created by IBM, was an electromechanical descendant of the Babbage analytical engine. In that period IBM thought that just five computers would be needed all over the world. The ENIAC was another relevant machine, created by the University of Pennsylvania, and it is considered the first general-purpose electronic computer. And finally, in this historical review of computers, we have the UNIVAC. The UNIVAC was the first commercial computer, and it was sold to the United States Census Bureau. The technology used in computers has passed through four different generations: in the first one the technology was based on vacuum tubes, which were huge devices like a bulb. In the second generation, during the fifties, the technology was based on transistors, and the third generation, which began in the sixties, was marked by the appearance of integrated circuits, which incorporate all the components in a small plate of silicon.
Finally, the fourth generation lasts until today and is based on the use of microchips, microprocessors. Current computers are like those ones; the only difference is that now they are just smaller and faster. And that's all, thanks.

TYPES OF COMPUTER
 So let's look at computer taxonomies: we're going to go from supercomputers to smartphones. We start with supercomputers, the ones that have the most calculating power, the fastest and most powerful ones. Supercomputers are designed for complex, highly demanding calculation tasks like weather forecasting, physical simulations or cryptanalysis, for example, and they're very expensive. Their performance is measured in FLOPS, which stands for floating point operations per second. After supercomputers come mainframes. Mainframes are computers used by large organizations, but they are not as specialized as supercomputers, and most of the time they are related to massive data applications: bulk data processing, industry and customer statistics, enterprise resource planning, transaction processing and, more recently, big data analysis, for insurance companies, banks, public administrations and research centers that analyze big amounts of data to extract information from it. Modern mainframes can run multiple instances of different operating systems at the same time; this is the technique of using virtual machines, and it allows applications to run as if they were on physically separate computers. While mainframes pioneered this capability, this type of virtualization with virtual machines is now actually available on most families of computer systems, although they do not always have the same level of sophistication as mainframes. What mainframes can also do is add or hot-swap system capacity without disrupting system function, and they can do this with a granularity that is not usually available in other server solutions. After mainframes come the servers, and they are similar to mainframes because they act as centralized resource providers with higher capacity than desktop computers.
The main difference between servers and mainframe systems is that mainframes are oriented to running specific corporate applications, like the big data analysis we spoke about before, while servers are more focused on sharing resources: for example offering storage, internet access, computing power or databases to users. So servers offer these services to the users. The range of prices and computing capacities is broader in servers than in mainframes, as you can find very small servers with capacities and prices that are not much higher than those of a high-end workstation, while mainframes are always powerful and expensive computers. Nowadays most internet services are provided by servers, which provide web pages, email, database storage and so on. Both mainframes and servers are multiuser machines, which means that they attend simultaneously to several users, typically many more than the number of processors they have. What they do in order to achieve that is use a technique called time-sharing, which we can see here on the slide. Basically, time-sharing consists of assigning the resources of the machine to one user, but only during a very short period of time. The computer switches quickly among active users, and it does it so quickly that it gives each user the sensation that the computer is dedicated only to the tasks they want it to do, but that's actually not true, because it's doing the tasks of many users; this is what is called multitasking. Most of these machines have several processors working at the same time with access to shared memory devices, and this is what is called multiprocessing. The processors can be assigned to solve different parts of the same task, and this is called parallel processing. In the particular case that a computer executes multiple processes on different processors, it can also be called multiprogramming.
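Time-sharing can be sketched as a round-robin scheduler that gives each user a short slice of CPU in turn. This toy model assumes each task simply needs some fixed number of slices; real schedulers also handle priorities, I/O waits and much more:

```python
from collections import deque

def round_robin(tasks):
    """Toy time-sharing: `tasks` maps each user to the number of time
    slices their task needs. Returns the order in which slices run."""
    queue = deque(tasks.items())
    timeline = []
    while queue:
        user, remaining = queue.popleft()
        timeline.append(user)          # the CPU serves this user briefly
        if remaining > 1:
            queue.append((user, remaining - 1))  # back to the end of the line
    return timeline

# Three users share one CPU; the slices alternate so quickly that each
# user feels the machine is dedicated to them alone.
print(round_robin({"ana": 3, "bob": 2, "eva": 1}))
# ['ana', 'bob', 'eva', 'ana', 'bob', 'ana']
```

No user ever waits for another's whole task to finish, only for one short slice, which is what creates the illusion of a dedicated machine.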
Moving from multi-user computers like mainframes and servers to single-user computers, we have workstations. Workstations are at the high end of personal computers: they have a very powerful CPU, lots of RAM memory, and most of the time come with one or more high-capacity graphics cards. Workstations are typically dedicated to 3D design, engineering simulation, medical imaging or stock market analysis, for example. Then, after workstations, come personal computers, which are more for the general public. A PC is most of the time a general purpose machine designed for end users. They are relatively cheap and are addressed to domestic and office users: we do word processing, spreadsheets and presentations with them. A PC suitable for mobile use is what we call a laptop or notebook. What they do is integrate all the devices in a single body with low weight, so we can carry it around. The weight can vary from around three kilos down to laptops of around one kilogram, and they are as powerful as desktop computers, but their expansion capacities are more limited: they might have less memory, smaller disks, and maybe they don't have all the expansion ports that a desktop computer has. Over the last few years, what we have seen as a replacement for laptops is tablets. Tablets are for users with even more mobility requirements, because tablets are very small, they have a touch screen, and they have very intuitive interfaces that make us able to use them anywhere. The fact that they are so intuitive and small has contributed to their fast adoption: many people are using tablets nowadays, and they are even being introduced into reluctant sectors. Tablets focus mainly on media consumption, information retrieval, gaming and online communication. Finally we have smartphones.
Smartphones are smaller than tablets. They are actually handheld computers, and they were born as a fusion of cell phones and personal digital assistants, PDAs. They allow us to be permanently connected and offer a broad spectrum of applications we can use, from professional and productivity tools to education or leisure. Other mobile devices that we want to show are PDAs, GPS devices and portable media players, PMPs, because they are also small computers, each with specific functions. A PDA, a personal digital assistant, is a handheld device that was actually made to be a personal information manager. They were very popular in the late nineties and early two thousands, but they have mostly disappeared, because all their functionality has been incorporated into what we know nowadays as smartphones. Their main functions are to store personal information such as telephone numbers, notes, photos and songs, to offer productivity tools such as a calculator, an appointment book or a calendar, and to provide communication tools like email and web access to connect to the Internet. The other handheld computers are portable GPS devices, which are designed to give the user their current position and to help them by providing navigation tools. Their functions have also mostly been integrated into what we nowadays know as smartphones, and their popularity has diminished, but they still have a small niche of application for people that go trekking, or for car navigation. Finally we have PMPs, or portable media players, which are also handheld devices designed to store and play audio, video and digital images in various formats. They usually store the content in an internal flash memory or on external flash memory cards. Their function has also been integrated into modern smartphones, so their presence in the market has also declined.

WEARABLE AND EMBEDDED COMPUTERS
Wearable computing is a fast-growing field that includes all kinds of electronic computing devices you can wear on the human body. They are used in applications that require recording or sending data about a person's activity or health, so they can contribute to a person's health or help workers do a job by providing computing power that is integrated into their work activities. For example, think of wearable computers like headsets, watches you wear on your wrist, jackets, eyeglasses, or even implanted devices. As an example, consider eyeglasses with an integrated screen for a technician doing maintenance: these glasses superimpose, in real time, manuals or images of the machine the technician is fixing. Or think of an insulin pump that delivers insulin to a patient at the intervals the computer determines are necessary. Or think of a combat helmet for soldiers that communicates with the command and gives them information about what can be found in the surroundings in the form of augmented reality, superimposing data on images captured by a camera in real time. These are all examples of wearable computers. Finally, there are also computers embedded into other, bigger systems, and these are called embedded computers: small computer chips placed inside another product or device to control it or perform specific functions. Nowadays they are in many electronic products, and we use a lot of them in our daily lives. You can find them, for example, in your car, which has dozens of embedded computers, and in household appliances like your microwave, your freezer or your dishwasher, as well as in home heating and cooling devices.
As I said, they are also in cars, trains, airplanes and other vehicles, as well as in lifts and escalators.



UPValenciaX: ISC101.2x Information Systems and Computer Applications, Part 2: Hardware