Friday, October 19, 2018

What Is Random Access Memory (RAM)?


What is RAM? People always say you should upgrade your RAM.

But what is it, anyway?

Random Access Memory, or RAM (pronounced as ramm), is the physical hardware inside a computer that temporarily stores data, serving as the computer's "working" memory.
Additional RAM allows a computer to work with more information at the same time, which usually has a dramatic effect on total system performance.
Some popular manufacturers of RAM include Kingston, PNY, Crucial Technology, and Corsair.
It's also known as:
  • main memory
  • internal memory
  • primary storage
  • primary memory
  • memory "stick"
  • RAM "stick"


Your Computer Needs RAM to Use Data Quickly

The purpose of RAM is to provide quick read and write access to the data your computer is actively working with. Your computer uses RAM to load data because it's much quicker than running that same data directly off a hard drive.
Think of RAM like an office desk. A desk is used for quick access to important documents, writing tools, and other items that you need right now. Without a desk, you'd keep everything stored in drawers and filing cabinets, meaning it would take much longer to do your everyday tasks since you would have to constantly reach into these storage compartments to get what you need, and then spend additional time putting them away.
Similarly, all the data you're actively using on your computer (or smartphone, tablet, etc.) is temporarily stored in RAM. This type of memory, like a desk in the analogy, provides much faster read/write times than using a hard drive. Most hard drives are considerably slower than RAM due to physical limitations like rotation speed.
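To see this difference for yourself, here is a minimal (and deliberately rough) Python sketch that copies the same chunk of data once within RAM and once back from a scratch file on disk. The file name and data size are arbitrary choices for the example, and the operating system's file cache can narrow the gap, so treat the numbers as an illustration rather than a proper benchmark.

```python
# Rough illustration of RAM vs. disk access speed (not a rigorous benchmark).
# "scratch.bin" and the 50 MB size are arbitrary choices for this example.
import os
import time

payload = os.urandom(50 * 1024 * 1024)      # 50 MB of data held in RAM

with open("scratch.bin", "wb") as f:        # write the same data to disk
    f.write(payload)

start = time.perf_counter()
in_memory_copy = bytes(payload)             # copy the data within RAM
ram_time = time.perf_counter() - start

start = time.perf_counter()
with open("scratch.bin", "rb") as f:        # read the same bytes back from disk
    from_disk = f.read()
disk_time = time.perf_counter() - start

print(f"RAM copy : {ram_time:.4f} s")
print(f"Disk read: {disk_time:.4f} s")

os.remove("scratch.bin")                    # clean up the scratch file
```
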
How Much RAM Do You Need?

Just like with a CPU and hard drive, the amount of memory you need for your computer depends entirely on what you use, or plan to use, your computer for.

For example, if you're buying a computer for heavy gaming, then you'll want enough RAM to support smooth gameplay. Having just 2 GB of RAM available for a game that recommends at least 4 GB is going to result in very slow performance if not total inability to play your games.


On the other end of the spectrum, if you use your computer for light internet browsing and no video streaming, games, memory-intensive applications, etc., you could easily get away with less memory.

The same goes for video editing applications, programs that are heavy on 3D graphics, etc. You can normally find out before you buy a computer just how much RAM a specific program or game will require, often listed in a "system requirements" area of the website or product box.

It would be hard to find a new desktop, laptop, or even tablet that comes with less than 2 to 4 GB of RAM pre-installed. Unless you have a specific purpose for your computer apart from regular video streaming, internet browsing, and normal application use, you probably don't need to buy a computer that has any more RAM than that.
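If you're curious how much RAM your current machine actually has before deciding on an upgrade, here is a minimal sketch that reads the totals, assuming the third-party psutil package is installed (pip install psutil).

```python
# Minimal sketch: report installed and currently available RAM.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

mem = psutil.virtual_memory()
print(f"Total RAM    : {mem.total / 1024**3:.1f} GB")
print(f"Available RAM: {mem.available / 1024**3:.1f} GB")
```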



Source : https://www.lifewire.com/what-is-random-access-memory-ram-2618159

Blood Pressure, and How Does an Automatic Blood Pressure Gauge (Sphygmomanometer) Work?


Have you ever checked your blood pressure, and do you know what it is?

Have you ever used a device that measures your blood pressure?

Have you ever thought about how it works? Let's explore!

Our heart is an amazing pump!! WHY IS THAT??? It works reliably for decades, and it safely pumps blood, which is one of the trickiest liquids around. To put it in simple words, our blood vessels are PIPES. They take the output from the pump and distribute it throughout the body.

THEREFORE, a blood pressure gauge is simply a way to measure the performance of the pump and the pipes.


There are two numbers in a blood pressure reading: systolic and diastolic. For example, a typical reading might be 120/80. When the doctor puts the cuff around your arm and pumps it up, what he/she is doing is cutting off the blood flow with the pressure exerted by the cuff. As the pressure in the cuff is released, blood starts flowing again and the doctor can hear the flow in the stethoscope. The number at which blood starts flowing (120) is the measure of the maximum output pressure of the heart (systolic reading). The doctor continues releasing the pressure on the cuff and listens until there is no sound. That number (80) indicates the pressure in the system when the heart is relaxed (diastolic reading).

If the numbers are too high, possible causes include:

  • The heart having to work too hard because of restrictions in the pipes.
  • Certain hormones, like adrenaline (which is released when you are under stress), causing blood vessels to constrict, which raises your blood pressure.
  • Constant stress, which keeps your blood pressure up and means your heart has to work too hard.
  • Other factors that can increase blood pressure, such as deposits in the pipes and a loss of elasticity as the blood vessels age.


High blood pressure can cause the heart to fail (from working too hard), or it can cause kidney failure (from too much pressure).




Automatic Cuffs: The Tech That Replaced the Manual Gauge Used for Ages!

Automatic blood pressure cuffs run on either electricity or battery power and have a digital screen that displays the blood pressure reading. Automatic cuffs work on the same principle as manual cuffs -- they inflate to cut off blood flow, then slowly release and register the points at which the pulse starts and stops. Once the test is complete, the unit displays both the systolic and diastolic numbers on the screen. Some automatic cuffs also register pulse rate and have a fail-safe that warns if the cuff is not in place. An automatic cuff is much easier to use but may also provide slightly different readings.
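Just to make that "register where the pulse starts and stops" idea concrete, here is a toy Python sketch. It is not how a real monitor works internally (actual automatic cuffs analyse the amplitude of pressure oscillations), and the sample values are made up; it only illustrates the start/stop logic described above.

```python
# Toy illustration of the start/stop idea: scan fake deflation samples,
# take the first pressure with a pulse as systolic and the pressure where
# the pulse disappears as diastolic. The numbers are invented for the example.

# Each sample: (cuff pressure in mmHg while deflating, pulse detected?)
samples = [
    (160, False), (150, False), (140, False),
    (120, True), (110, True), (100, True), (90, True),
    (80, False), (70, False),
]

systolic = None   # pressure at which the pulse is first detected
diastolic = None  # pressure at which the pulse disappears again

for pressure, pulse in samples:
    if pulse and systolic is None:
        systolic = pressure
    if systolic is not None and not pulse and diastolic is None:
        diastolic = pressure

print(f"Reading: {systolic}/{diastolic} mmHg")   # -> Reading: 120/80 mmHg
```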



Compared with a manual cuff gauge, does this device measure accurately? Share with us some of your experiences using one! 😼


Source :  https://science.howstuffworks.com/innovation/everyday-innovations/question146.htm

Thursday, October 18, 2018

ROLE OF INFORMATION TECHNOLOGY IN MEDICAL SCIENCE




Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise. Today, information technology is used in a wide range of fields, and one of the emerging areas is medical science, where it is known as Health Information Technology (HIT).

Health information technology (HIT) is the application of information processing, involving both computer hardware and software, that deals with the storage, retrieval, sharing, and use of health care information, data, and knowledge for communication and decision making. In HIT, technology represents the computing and communications capabilities that can be networked to build systems for moving health information.


LET'S LEARN A LIL BIT OF HISTORY, SHALL WE?


1949   -     Gustav Wagner established the first professional organization for health informatics in Germany.

1950s  -    The rise of computers. Worldwide use of computer technology in medicine began in this era.


Health informatics, also called health information systems, is a discipline at the intersection of information science, computer science, and health care. It is concerned with the resources, devices, and methods required for optimizing the acquisition, storage, retrieval, and use of information in health and biomedicine.

Health informatics tools include computers, clinical guidelines, formal medical terminologies, and information and communication systems. It is applied to the areas of nursing, clinical care, dentistry, pharmacy, public health, occupational therapy, and (bio)medical research.


The SunDaily Article on Pharmacy Information Management System 

There is no argument over the influence of IT in medicine and education. But there are still many areas which need to be improved before we can utilise IT to its full extent. Last but not least, however advanced the technology gets, it can never replace the interaction that doctors and students require with the patient, or the clinical judgment that makes great doctors. So, in the pursuit of modern technologies, we should be careful that the doctor-patient relationship does not get overlooked.

What is a QR Code?

Denso Wave: The Inventor of the QR Code


What is a QR Code? 

  • It is a two-dimensional square barcode which can store encoded data.
  • Most of the time the data is a link to a website (URL) - see the sketch below.
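To show how little is involved in putting a URL into a QR Code, here is a minimal sketch using the third-party qrcode package (with Pillow) - an assumed choice of library, not the only way to do it.

```python
# Minimal sketch: encode a URL into a QR Code image.
# Assumes the third-party qrcode package with Pillow is installed
# (pip install "qrcode[pil]"). The URL and file name are just examples.
import qrcode

img = qrcode.make("https://example.com")   # build the 2D barcode for this URL
img.save("example_qr.png")                 # save it as a PNG you can print or scan
```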

Introduction

In this digital era, QR Codes are most often seen on advertisements, flyers, posters, magazines, and so on; you can easily spot these two-dimensional barcodes around you. QR Codes let you interact with the world using your smartphone.

Specifically, a QR Code extends the data available on any physical object and creates a digital extension to marketing operations. This technology enables and speeds up the use of mobile web services: it is a very creative digital tool.


Interactive actions


When scanning a QR Code using a smartphone, you get immediate access to its content. The QR Code reader can then carry out an action, like opening your web browser to a specific URL. Other actions can be triggered, like storing a business card in your smartphone's contact list or connecting to a wireless network.
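For the reverse direction, here is a minimal sketch of reading a QR Code image back into its text, assuming the third-party pyzbar and Pillow packages (and the underlying zbar library) are installed; a phone's built-in reader does the same job through its camera.

```python
# Minimal sketch: decode a QR Code image back into the text it stores.
# Assumes pyzbar and Pillow are installed (pip install pyzbar pillow) and
# that "example_qr.png" is the image generated earlier.
from PIL import Image
from pyzbar.pyzbar import decode

results = decode(Image.open("example_qr.png"))
for result in results:
    print(result.data.decode("utf-8"))     # e.g. https://example.com
```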

History

QR Codes were created in 1994 by Denso Wave, a Japanese subsidiary of the Toyota Group. The use of this technology is now free. The QR Code is not the only two-dimensional barcode on the market; another example is the Data Matrix code.
The QR Code is the most famous 2D barcode in the world. It has enjoyed success in Japan since the 2000s, where it is now a standard. In 2011, an average of 5 QR Codes were scanned daily by each Japanese person - more than the average number of SMS sent!
In 2010, QR Codes started to expand in the USA and then in Europe, where they can notably be seen in advertisements.


Wednesday, October 10, 2018

What is Artificial Intelligence exactly?

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.
Max Tegmark, President of the Future of Life Institute

So what exactly is AI, explained in an easy-to-understand way?

THE ANSWER?
Commonly, AI or Artificial Intelligence is a machine or a computer program that has learnt how to do tasks requiring forms of INTELLIGENCE that are usually performed by humans. To put it another way, intelligence comes in many different aspects, and we have many types of AI that are good at particular subsets of intelligence.
Some examples are described below.

From the creation of AIs like Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to autonomous weapons.
Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Artificial Intelligence History

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.
Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.
This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.
While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.
1950s–1970s: Neural Networks
Early work with neural networks stirs excitement for “thinking machines.”
1980s–2010s: Machine Learning
Machine learning becomes popular.
Present day: Deep Learning
Deep learning breakthroughs drive the AI boom.

The Advantages of Artificial Intelligence (AI)

  • Jobs - depending on the level and type of intelligence these machines receive in the future, it will obviously affect the type of work they can do, and how well they can do it (they can become more efficient). As the level of AI increases, so will their competency to deal with difficult, complex, even dangerous tasks that are currently done by humans - a form of applied artificial intelligence.
  • Increase our technological growth rate - following on from the point above, AI will potentially help us 'open doors' into new and more advanced technological breakthroughs. For instance, thanks to their ability to run millions and millions of computer modelling programs with high degrees of accuracy, machines could essentially help us find and understand new chemical elements and compounds. Basically, a very realistic advantage AI could offer is to act as a catalyst for further technological and scientific discovery.
  • They don't stop - as they are machines, there is no need for sleep, they don't get ill, and there is no need for breaks or Facebook; they are able to go, go, go! They may obviously need to be charged or refuelled, but the point is they are definitely going to get a lot more work done than we can. All that is required is some energy source.
  • No risk of harm - when we are exploring new undiscovered land or even planets, if a machine gets broken or 'dies', there is no harm done, as machines don't feel and don't have emotions. For humans, going on the same type of expeditions a machine does may simply not be possible, or would mean exposing themselves to high-risk situations.
  • Act as aids - they can act as 24/7 aids to children with disabilities or the elderly, and they could even act as a source for learning and teaching. They could even be part of security, alerting you to possible fires or fending off crime.
  • Their function is almost limitless - as the machines will be able to do everything (but just better), their use pretty much doesn't have any boundaries. They will make fewer mistakes, they are emotionless, they are more efficient, and they basically give us more free time to do as we please.

The Disadvantages of Artificial Intelligence (AI)

  • Over-reliance on AI - as you may have seen in many films such as The Matrix, I, Robot, or even kids' films such as WALL-E, if we rely on machines to do almost everything for us, we become so dependent that if they were to simply shut down (or even decide they want to give up this working gig), they would have the potential to ruin our economy and effectively our lives. Although the films are essentially just fiction, they still present a real possibility if we become too heavily dependent on machines. It wouldn't be too smart on our part not to have some sort of backup plan for potential issues that could arise if the machines 'got real smart'.
  • Human feel - as they are machines, they obviously can't provide you with that 'human touch and quality' - the feeling of togetherness and emotional understanding. Machines lack the ability to sympathise and empathise with your situation, and may act irrationally as a consequence.
  • Inferior - as machines will be able to perform almost every task better than us in practically all respects, they will take up many of our jobs, which will then result in masses of people who are jobless and, as a result, feel essentially useless. This could then lead to issues such as mental illness and obesity.
  • Misuse - there is no doubt that this level of technology in the wrong hands can cause mass destruction, where robot armies could be formed, or they could perhaps malfunction or be corrupted, in which case we could be facing a scene similar to The Terminator (hey, you never know).
  • Ethically wrong? - people say that the gift of intuition and intelligence was God's gift to mankind, and so to replicate that would be to kind of 'play God'. Therefore, it may not be right to even attempt to clone our intelligence.

HOW CAN AI BE DANGEROUS?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate. When considering how AI might become a risk, experts think these two scenarios are most likely:

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could lead to an AI war that also results in mass casualties. To avoid being misused by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.
The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.