HI! This blog is especially for those guys like me who are techno freaks. Lots of info on this blog in both the computer and electronics fields. Go through it and enjoy!
Monday, June 14, 2010
Nanobots
Nanorobotics is the technology of creating machines or robots at or close to the microscopic scale of a nanometer (10⁻⁹ meters). More specifically, nanorobotics refers to the still largely hypothetical nanotechnology engineering discipline of designing and building nanorobots, devices ranging in size from 0.1–10 micrometers and constructed of nanoscale or molecular components. As no artificial non-biological nanorobots have yet been created, they remain a hypothetical concept. The names nanobots, nanoids, nanites or nanomites have also been used to describe these hypothetical devices.
Nanomachines are largely in the research-and-development phase, but some primitive molecular machines have been tested. An example is a sensor having a switch approximately 1.5 nanometers across, capable of counting specific molecules in a chemical sample. The first useful applications of nanomachines, if such are ever built, might be in medical technology, where they might be used to identify cancer cells and destroy them. Another potential application is the detection of toxic chemicals, and the measurement of their concentrations, in the environment. Recently, Rice University demonstrated a single-molecule car, developed by a chemical process, that uses buckyballs for wheels. It is actuated by controlling the environmental temperature and by positioning a scanning tunneling microscope tip.
There is a race among scientists and researchers to develop computers that operate on the molecular level, using only a few atoms to convey information and do calculations. Binary technology, with its simple 1/0 or on-off states, allows technology of the most complex kinds to be represented by almost anything that can hold two states. From the race to make smaller, more energy-efficient computers, two models have come to the fore: the electronic and the biological.
Each is still experimental. Each has potential advantages and disadvantages, and neither is ready to replace your home computer yet. But in the last few weeks, thanks to some breakthroughs, the advantage has shifted toward the biological models. What does this mean for the world of computers and society? That is what this article will explore.
There has been a great deal of science fiction and future projection about the reduction of computer size, with components becoming molecular in scale. The logic of such an approach can be seen in the results of component reduction in computers: from the vacuum tube, to the transistor, to the integrated circuit, to the microprocessor chip.
With each reduction in size came an exponential increase in the power and efficiency of computers. Today's desktop and laptop personal computers are light-years ahead of the most advanced computers of the early space age. In fact, today's personal computers are several times more powerful than those used to place men on the moon a little more than thirty years ago.
Keeping this in mind, it is easy to understand why so much effort and research has gone into the development of computers that will work using components molecular in size. Hardware theoreticians, physicists, and chemists thought it would be just a matter of extending current microprocessor theory down to its most basic level. In theory this would be simple enough, but the technology to work effectively with small groups of atoms in molecules designed for calculating and detecting using sub-atomic particles did not yet exist. The complications of quantum mechanics and exotic cutting-edge theory led many to believe that what once seemed very near on the horizon may be further off than imagined. Then some researchers thought of another approach.
Microbiologists had examples of micro-machines with molecular components around them all the time. Single-celled animals and plants, whose components operate through enzyme stimulation, offered an interesting model. Living organisms offered examples of micro-computing that had been overlooked! If binary is simply an "off" or an "on", these living things switch off and on by reacting to enzymes. Somewhere along the way, someone figured that if a small-scale biological reaction could be detected, living material could be used as a computer.
Earlier this year researchers successfully tested this theory using enzymes to cause DNA strands to calculate using binary. The strands opened and closed in response to external input. While this is far from the complex circuitry of a sophisticated computer, it is a breakthrough that may shift the whole focus of research.
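To make the binary analogy concrete, here is a toy sketch that models DNA strands as two-state switches flipped by enzyme inputs. This is a pure software abstraction for intuition only; the function and enzyme names are hypothetical, not a model of the actual biochemistry used in those experiments.

```python
# Toy abstraction of enzyme-switched DNA "bits": each strand is a
# two-state switch (closed = 0, open = 1) flipped by enzyme inputs.
# Purely illustrative; not a model of real biochemistry.

def strand_state(opening_enzyme: bool, closing_enzyme: bool, current: int) -> int:
    """Return the strand's next state given which enzymes are present."""
    if opening_enzyme and not closing_enzyme:
        return 1  # strand opens  -> logical 1
    if closing_enzyme and not opening_enzyme:
        return 0  # strand closes -> logical 0
    return current  # no change (neither or both enzymes present)

# A two-strand "register" responding to external enzyme inputs:
bits = [0, 0]
bits[0] = strand_state(opening_enzyme=True,  closing_enzyme=False, current=bits[0])
bits[1] = strand_state(opening_enzyme=False, closing_enzyme=True,  current=bits[1])
print(f"register reads: {bits}")  # [1, 0] -> binary '10'
```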
The possibility of using organic components with enzyme-stimulated responses has some interesting possibilities. Organics and their reactions are well known, and the material for them is easily available. This doesn't mean that this technology will be immediately available! An organic-based computing system has several drawbacks. Living material would be susceptible to "infections" of a sort, and you can imagine that a computer virus could be both organic and software based. Then comes the ethical question of animating organic material. If you design organic machines to react to stimuli, where does the line between creator and builder come into play?
Then of course there are places where organic material just would not work. The nano-machines that would be a natural by-product of molecular computers would often have uses for which inorganics would be preferable. For many of the medical applications for which nano-machines driven by molecular computers would be designed, organic material would be a risky and unwise choice.
The race to create the molecular computer is not just a race of competing technologies, but an attempt to maximize the efficiency, effectiveness, and usefulness of the computer. The first to successfully create a working molecular computer will be able to create the microscopic nano-machines that can build and repair things at a fundamental level. They will create a new technology that changes everything from medicine to space exploration. Both technologies hold promise, have limits, and will have a profound impact on every aspect of society. If history is any indicator, the potential of this prospective new technology is beyond measure. Imagine computers small enough to operate machines that could travel through your bloodstream, repair your body, fight disease, or monitor the flow of oil in an engine. These are only some of the possibilities; the race is on!
Breeder Reactors
The fast breeder or fast breeder reactor (FBR) is a fast neutron reactor designed to breed fuel by producing more fissile material than it consumes. The FBR is one possible type of breeder reactor.
The reactors are used in nuclear power plants to produce nuclear power and nuclear fuel.
FBRs usually use a mixed oxide fuel core of up to 20% plutonium dioxide (PuO2) and at least 80% uranium dioxide (UO2). Another fuel option is metal alloys, typically a blend of uranium, plutonium, and zirconium. The plutonium used can be supplied by reprocessing reactor outputs or "off the shelf" from dismantled nuclear weapons.
In many FBR designs, the reactor core is surrounded by a blanket of tubes containing non-fissile uranium-238 which, by capturing fast neutrons from the reaction in the core, is partially converted to fissile plutonium-239 (as is some of the uranium in the core), which can then be reprocessed for use as nuclear fuel. Other FBR designs rely on the geometry of the fuel itself (which also contains uranium-238) to attain sufficient fast neutron capture.
The ratio between the Pu-239 (or U-235) fission cross-section and the U-238 absorption cross-section is much higher in a thermal spectrum than in a fast spectrum. Therefore, a higher enrichment of the fuel is needed in a fast reactor in order to reach a self-sustaining nuclear chain reaction.
Since a fast reactor uses a fast spectrum, no moderator is required to thermalize the fast neutrons.
All current fast reactor designs use liquid metal as the primary coolant, to transfer heat from the core to steam used to power the electricity generating turbines. Some early FBRs used mercury, and other experimental reactors have used NaK. Both of these choices have the advantage that they are liquids at room temperature, which is convenient for experimental rigs but less important for pilot or full scale power stations.
Sodium is the normal coolant for large power stations, but lead has been used successfully for smaller generating rigs. Both coolant choices are being studied for possible Generation IV reactors, and each presents some advantages. A gas-cooled option is also being studied, although no gas-cooled fast reactor has yet reached criticality.
Liquid water is an undesirable primary coolant for fast reactors because large amounts of water in the core are required to cool the reactor. Since water is a neutron moderator, this slows neutrons to thermal levels and prevents the breeding of uranium-238 into plutonium-239. Theoretical work has been done on reduced moderation water reactors, which may have a sufficiently fast spectrum to provide a breeding ratio slightly over 1. This would likely result in an unacceptable power derating and high costs in an LWR-derivative reactor, but the supercritical water coolant of the SCWR has sufficient heat capacity to allow adequate cooling with less water, making a fast-spectrum water cooled reactor a practical possibility. In addition, a heavy water moderated thermal breeder reactor, using thorium to produce uranium-233, is theoretically possible (see Advanced Heavy Water Reactor).
Under appropriate operating conditions, the neutrons given off by fission reactions can "breed" more fuel from otherwise non-fissionable isotopes. The most common breeding reaction is that of plutonium-239 from non-fissionable uranium-238. The term "fast breeder" refers to the types of configurations which can actually produce more fissionable fuel than they use, such as the LMFBR. This scenario is possible because the non-fissionable uranium-238 is 140 times more abundant than the fissionable U-235 and can be efficiently converted into Pu-239 by the neutrons from a fission chain reaction.
France has made the largest implementation of breeder reactors with its large Super-Phénix reactor, while the intermediate-scale BN-350 reactor on the Caspian Sea was operated by the Soviet Union for electric power and desalination.
In the breeding of plutonium fuel in breeder reactors, an important concept is the breeding ratio: the amount of fissile plutonium-239 produced compared to the amount of fissionable fuel (like U-235) used to produce it. In the liquid-metal fast-breeder reactor (LMFBR), the target breeding ratio is 1.4, but the results achieved have been about 1.2. This is based on 2.4 neutrons produced per U-235 fission, with one neutron used to sustain the reaction.
The time required for a breeder reactor to produce enough material to fuel a second reactor is called its doubling time, and present design plans target a doubling time of about ten years. A reactor could use the heat of the reaction to produce energy for ten years and, at the end of that time, have bred enough fuel to run another reactor for ten years.
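To see how breeding ratio and doubling time connect numerically, here is a minimal back-of-the-envelope sketch. The inventory and consumption figures are illustrative assumptions only, not data for any real reactor.

```python
# Rough doubling-time estimate for a breeder reactor.
# All quantities below are illustrative assumptions, not real reactor data.

breeding_ratio = 1.2             # fissile atoms bred per fissile atom consumed (achieved LMFBR value cited above)
fissile_inventory_kg = 2500      # assumed fissile inventory loaded in the core
consumption_kg_per_year = 1000   # assumed fissile material fissioned per year

# Net fissile gain per year: for every kg burned, (BR - 1) kg of surplus fuel appears.
net_gain_per_year = (breeding_ratio - 1) * consumption_kg_per_year

# Doubling time: years needed to breed a whole second core's worth of fuel.
doubling_time_years = fissile_inventory_kg / net_gain_per_year
print(f"Net gain: {net_gain_per_year:.0f} kg/year")
print(f"Doubling time: {doubling_time_years:.1f} years")  # ~12.5 years with these assumptions
```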
Several countries are developing more proliferation resistant reprocessing methods that don't separate the plutonium from the other actinides. For instance, the pyrometallurgical process when used to reprocess fuel from the Integral Fast Reactor leaves large amounts of radioactive actinides in the reactor fuel. Removing these transuranics in a conventional reprocessing plant would be extremely difficult as many of the actinides emit strong neutron radiation, requiring all handling of the material to be done remotely, thus preventing the plutonium from being used for bombs while still being usable as reactor fuel.
Thorium fueled reactors may pose a slightly higher proliferation risk than uranium based reactors. The reason for this is that while Pu-239 will fairly often fail to undergo fission on neutron capture, producing Pu-240, the corresponding process in the thorium cycle is relatively rare. Thorium-232 converts to U-233, which will almost always undergo fission successfully, meaning that there will be very little U-234 produced in the reactor's thorium/U-233 breeder blanket, and the resulting pure U-233 will be comparatively easy to extract and use for weapons. However, U-233 is normally accompanied by U-232 (produced in neutron knock-off reactions), which has the strong gamma emitter Tl-208 in its decay chain. These gamma rays complicate the safe handling of a weapon and the design of its electronics, which is why U-233 has never been pursued for weapons beyond proof-of-concept demonstrations. One proposed solution to this is to mix natural or depleted uranium into the thorium breeder blanket. When diluted with enough U-238, the resulting uranium mixture would no longer be weapons usable, but significant quantities of plutonium would also be produced.
THE LARGE HADRON COLLIDER
The Large Hadron Collider (LHC) is a gigantic scientific instrument near Geneva, where it spans the border between Switzerland and France about 100 m underground. It is a particle accelerator used by physicists to study the smallest known particles – the fundamental building blocks of all things. It will revolutionise our understanding, from the minuscule world deep within atoms to the vastness of the Universe.
Two beams of subatomic particles called 'hadrons' – either protons or lead ions – will travel in opposite directions inside the circular accelerator, gaining energy with every lap. Physicists will use the LHC to recreate the conditions just after the Big Bang, by colliding the two beams head-on at very high energy. Teams of physicists from around the world will analyse the particles created in the collisions using special detectors in a number of experiments dedicated to the LHC.
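To put "very high energy" in perspective: at the LHC design energy of 7 TeV per proton, each proton is ultra-relativistic. Here is a minimal sketch of the arithmetic (the 7 TeV figure is the design beam energy; the rest are standard physical constants):

```python
import math

# Lorentz factor of an LHC proton at the design beam energy of 7 TeV.
proton_rest_energy_gev = 0.938272  # proton rest-mass energy, GeV
beam_energy_gev = 7000.0           # LHC design energy per proton, GeV

gamma = beam_energy_gev / proton_rest_energy_gev
beta = math.sqrt(1.0 - 1.0 / gamma**2)  # speed as a fraction of c

print(f"Lorentz factor gamma ~ {gamma:,.0f}")  # ~7,460
print(f"Speed as fraction of c: {beta:.9f}")   # ~0.999999991
```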
There are many theories as to what will result from these collisions, but what's for sure is that a brave new world of physics will emerge from the new accelerator, as knowledge in particle physics goes on to describe the workings of the Universe. For decades, the Standard Model of particle physics has served physicists well as a means of understanding the fundamental laws of Nature, but it does not tell the whole story. Only experimental data using the higher energies reached by the LHC can push knowledge forward, challenging those who seek confirmation of established knowledge, and those who dare to dream beyond the paradigm.
The collider tunnel contains two adjacent parallel beam pipes that intersect at four points; each pipe contains a proton beam, and the two beams travel in opposite directions around the ring. Some 1,232 dipole magnets keep the beams on their circular path, while an additional 392 quadrupole magnets keep the beams focused, in order to maximize the chances of interaction at the four intersection points where the two beams cross. In total, over 1,600 superconducting magnets are installed, with most weighing over 27 tonnes. Approximately 96 tonnes of liquid helium are needed to keep the magnets at their operating temperature of 1.9 K (−271.25 °C), making the LHC the largest cryogenic facility in the world at liquid helium temperature.
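As a rough cross-check on those magnet numbers: the beam momentum a ring can hold follows p [GeV/c] ≈ 0.3·B [T]·R [m]. Assuming the commonly quoted effective bending radius of about 2,804 m for the LHC dipoles, the 7 TeV design beam requires a field of roughly 8.3 T, which is why superconducting magnets cooled to 1.9 K are needed. A minimal sketch:

```python
# Dipole field needed to bend the LHC design beam around the ring.
# Relation for a charged particle in a magnetic field: p [GeV/c] ~ 0.3 * B [T] * R [m].
# The ~2804 m effective bending radius is the commonly quoted LHC figure.

beam_momentum_gev = 7000.0  # design beam momentum, GeV/c
bending_radius_m = 2804.0   # effective bending radius of the dipoles

b_field_tesla = beam_momentum_gev / (0.3 * bending_radius_m)
print(f"Required dipole field: {b_field_tesla:.1f} T")  # ~8.3 T
```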
Physicists hope that the LHC will help answer many of the most fundamental questions in physics: questions concerning the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, especially regarding the intersection of quantum mechanics and general relativity, where current theories and knowledge are unclear or break down altogether. These issues include, at least:
Is the Higgs mechanism for generating elementary particle masses via electroweak symmetry breaking indeed realised in nature? It is anticipated that the collider will either demonstrate or rule out the existence of the elusive Higgs boson(s), completing (or refuting) the Standard Model.
Is supersymmetry, an extension of the Standard Model and Poincaré symmetry, realised in nature, implying that all known particles have supersymmetric partners?
Are there extra dimensions, as predicted by various models inspired by string theory, and can we detect them?
What is the nature of the Dark Matter which appears to account for 23% of the mass of the Universe?
Other questions are:
Are electromagnetism, the strong nuclear force and the weak nuclear force just different manifestations of a single unified force, as predicted by various Grand Unification Theories?
Why is gravity so many orders of magnitude weaker than the other three fundamental forces?
Are there additional sources of quark flavours, beyond those already predicted within the Standard Model?
Why are there apparent violations of the symmetry between matter and antimatter?
What was the nature of the quark-gluon plasma in the early universe? This will be investigated by ion collisions in ALICE.
Cost
With a budget of 9 billion US dollars (approx. €7.5bn or £6.19bn as of Jun 2010), the LHC is one of the most expensive scientific instruments ever built. The total cost of the project is expected to be of the order of SFr 4.6bn (approx. $4.4bn, €3.1bn, or £2.8bn as of Jan 2010) for the accelerator and SFr 1.16bn (approx. $1.1bn, €0.8bn, or £0.7bn as of Jan 2010) for the CERN contribution to the experiments.
Blue Screen Errors in Windows
A blue screen crash wipes out whatever you had pending: an error screen suddenly appears and informs you that your system has frozen. Some DLL errors may also appear, asking whether you recently added any new hardware or programs to your computer.
When this happens it can drive you mad, especially if you had not saved or finished what you were working on. Of course, you then have to do your work all over again from the beginning. You would be really upset and angry, no? So what can you do to fix a blue screen error like this?
The easiest and fastest way to tackle this problem is to run registry cleaner software. And if you had decided to let a technician fix your computer for you, stop right there; you do not want to pay for something you can fix yourself, right?
Also, if you ask a technician whether he can fix the crash, he will undoubtedly tell you that it is a very complex matter and should therefore be handled by a professional like him. If the technician did not say that, there would be no business left for him.
But in reality, all he is going to do is run a registry cleaner on the drive and then charge you for the repair and the software used. Downloading, installing, and running such software to repair a blue screen error is not very difficult to do, and it saves time; so why would you pay someone else to do something that simple for you?
Here are some things you can do to fix and resolve a blue screen crash. This is especially useful if you are not technically inclined and just want quick and simple solutions.
1. Restart your computer
Sometimes the blue screen appears only once and then never again. In this case, a simple restart is just what you need to resolve the crash.
On the other hand, if the problem recurs regularly, you may have to try other things.
2. Replace your random access memory (RAM)
RAM is a hardware module installed on the computer's motherboard. It is a rectangular stick that holds the data and instructions the CPU is working with. Once this component stops functioning normally, the computer will, without doubt, crash to the blue screen. Replacing the RAM can therefore be a solution.
3. Run a registry cleaner
A registry cleaner can eliminate errors in your registry. The registry stores information about the software, drivers, applications and games installed on your computer. Over time it can become corrupted, because obsolete entries are not deleted, and this can lead to what is called the "blue screen of death."
By using a registry cleaner, the blue screen error can be eradicated, and the speed of your computer increases in the process. In addition, the blue screen of death is far less likely to recur if you regularly check and maintain your computer system.
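If the crashes keep coming back, it also helps to know how often they actually happen. Windows normally writes a small crash dump after each blue screen, by default under C:\Windows\Minidump. Here is a minimal Python sketch (assuming the default dump location; reading the folder may require administrator rights) that lists the most recent dumps:

```python
from pathlib import Path
from datetime import datetime

# Default folder where Windows writes small memory dumps after a blue screen.
minidump_dir = Path(r"C:\Windows\Minidump")

if not minidump_dir.exists():
    print("No minidump folder found - either no crashes recorded or dumps are disabled.")
else:
    dumps = sorted(minidump_dir.glob("*.dmp"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    print(f"{len(dumps)} crash dump(s) found:")
    for dump in dumps[:10]:  # show the ten most recent
        when = datetime.fromtimestamp(dump.stat().st_mtime)
        print(f"  {dump.name}  written {when:%Y-%m-%d %H:%M}")
```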
PWM CONTROLLER IC 3525
The function of each pin of the SG3525A:
Pin 1: Inverting input of the error amplifier; it normally takes the feedback from the output and compares it with the reference input.
Pin 2: Non-inverting input of the error amplifier; it normally takes the reference input and compares it with the feedback.
Pin 3: Sync pin; it is not used in this circuit, but it allows the device to be cascaded or synchronized with other devices.
Pin 4: OSC output, used to take out the oscillator frequency.
Pins 5 and 6: These are used to select the operating frequency of the whole circuit.
Pin 7: This is the discharge pin; it is used to set the dead time between the pulses.
Pin 8: This is the soft-start pin; it allows the device to start slowly and gradually.
Pin 9: This pin provides closed-loop frequency compensation to the circuit.
Pin 10: This is the shutdown pin; it shuts the IC down in case of hazardous circumstances.
Pins 11 and 14: These provide the output from the IC in the form of pulses.
Pins 12 and 15: These are the ground and supply pins of the IC, respectively.
Pin 16: It provides the reference voltage for pin 2.
The SG3525A pulse width modulator control circuits offer improved performance and a lower external parts count when implemented to control all types of switching power supplies. The on-chip +5.1 V reference is trimmed to ±1%, and the error amplifier has an input common-mode voltage range that includes the reference voltage, thus eliminating the need for external divider resistors. A sync input to the oscillator enables multiple units to be slaved, or a single unit to be synchronized to an external system clock. A wide range of dead time can be programmed by a single resistor connected between the CT and Discharge pins. These devices also feature built-in soft-start circuitry, requiring only an external timing capacitor.
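As a quick illustration of how the timing components set the frequency, here is a minimal sketch using the oscillator relation commonly quoted in SG3525A datasheets, f ≈ 1/(CT·(0.7·RT + 3·RD)). The component values are example assumptions, not a recommended design:

```python
# Approximate SG3525A oscillator frequency from the timing components.
# Relation commonly quoted in the datasheet: f ~ 1 / (CT * (0.7*RT + 3*RD)).
# Component values below are illustrative assumptions only.

CT = 1e-9   # timing capacitor on pin 5, farads (1 nF)
RT = 10e3   # timing resistor on pin 6, ohms (10 kOhm)
RD = 100.0  # dead-time resistor between pins 5 and 7, ohms

f_osc = 1.0 / (CT * (0.7 * RT + 3.0 * RD))
print(f"Oscillator frequency: {f_osc/1e3:.0f} kHz")  # ~137 kHz, inside the 100 Hz - 400 kHz range

# Each output (pins 11 and 14) switches at half the oscillator rate,
# since the two outputs alternate.
print(f"Per-output frequency: {f_osc/2e3:.0f} kHz")
```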
A shutdown pin controls both the soft-start circuitry and the output stages, providing instantaneous turn-off through the PWM latch with pulsed shutdown, as well as soft-start recycle with longer shutdown commands. The undervoltage lockout inhibits the outputs and the charging of the soft-start capacitor when VCC is below nominal. The output stages are of totem-pole design, capable of sinking and sourcing in excess of 200 mA. The output stage of the SG3525A features NOR logic, resulting in a low output for the off state, while the SG3527A utilizes OR logic, which gives a high output when off.
Features of 3525A
8.0 V to 35 V Operation
5.1 V ± 1.0% Trimmed Reference
100 Hz to 400 kHz Oscillator Range
Separate Oscillator Sync Pin
Adjustable Dead Time Control
Input Undervoltage Lockout
Latching PWM to Prevent Multiple Pulses
Pulse–by–Pulse Shutdown
Dual Source/Sink Outputs: ±400 mA Peak
NEED FOR A DC-DC CONVERTER
Electric power is not normally used in the form in which it is produced or distributed. Practically all electronic systems require some form of power conversion. A device that transfers electric energy from a given source to a given load using electronic circuits is referred to as a power supply (although "power converter" is a more accurate term for such a device).
2.1 Understanding
When the output voltage set point is less than the input voltage, the regulator is called a buck converter. When the output voltage set point is higher, it is a boost converter. A feedback input is necessary for the regulator to know the state of the output voltage so that it can be kept within the tolerances required by the power supply design. The converter controls the output voltage to specification by comparing the output voltage (or current, or both) to an internal reference.
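For the ideal (lossless, continuous-conduction) case, the duty cycle D relates input and output as Vout = D·Vin for a buck and Vout = Vin/(1−D) for a boost. Here is a minimal sketch; the 12 V→5 V and 3.3 V→5 V figures are illustrative assumptions:

```python
# Ideal steady-state duty cycle for buck and boost converters
# (lossless, continuous-conduction-mode approximations).

def buck_duty(v_in: float, v_out: float) -> float:
    """Buck: Vout = D * Vin, valid when Vout < Vin."""
    assert v_out < v_in, "a buck only steps down"
    return v_out / v_in

def boost_duty(v_in: float, v_out: float) -> float:
    """Boost: Vout = Vin / (1 - D), valid when Vout > Vin."""
    assert v_out > v_in, "a boost only steps up"
    return 1.0 - v_in / v_out

# Illustrative example voltages:
print(f"12 V -> 5 V buck:   D = {buck_duty(12.0, 5.0):.2f}")   # ~0.42
print(f"3.3 V -> 5 V boost: D = {boost_duty(3.3, 5.0):.2f}")   # ~0.34
```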
In the case of a linear regulator, power is transferred continuously from Vin to Vout. In the case of a switching regulator, power is transferred from Vin to Vout in bursts. There are two main types of switching regulators: inductive and charge pump (capacitive). Not every electronic system needs a regulator. The electronics in a typical system can operate within a narrow band (5% or 10%) around their rated voltage. The battery output voltage declines as the battery discharges. To prolong the usable life of the system, one could use electronics that operate at voltages toward the low end of the battery discharge curve, but then a fresh battery's voltage would far exceed the upper tolerance of the electronics. If the electronics were instead chosen for the upper end of the battery voltage, the battery would soon discharge below the lower tolerance of the electronics.
One way to address this issue is wider-range electronics, but this can be an expensive proposition. Another way is to use a regulator. If the battery voltage range is narrow (e.g. from NiCd cells), a low-dropout linear regulator may be suitable to produce a regulated lower output voltage. If the system voltage is higher than the battery voltage range, or within the range, then a switching regulator in a boost or buck-boost configuration can be used. Direct current-to-direct current (DC/DC) converters with faster switching frequencies are becoming popular due to their ability to decrease the size of the output capacitor and inductor and so save board space. On the other hand, the demands on the point-of-load (POL) power supply increase as processor core voltages drop below 1 V, making lower voltages difficult to achieve at faster frequencies due to the lower duty cycle.
Many power IC suppliers are aggressively marketing faster DC/DC converters that claim to save space. A DC/DC converter switching at 1 or 2 MHz sounds like a great idea, but there is more to understand about the impact on the power supply system than size and efficiency alone. The sketch below illustrates both the benefit and the trade-off of switching at faster frequencies.
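To see why faster switching shrinks the magnetics, consider the standard buck inductor ripple relation ΔI = (Vin − Vout)·D/(L·f): the inductance needed for a given ripple falls inversely with frequency. A minimal sketch; the 12 V→5 V converter and 0.6 A ripple target are illustrative assumptions:

```python
# Required buck inductance for a target ripple current at different
# switching frequencies: L = (Vin - Vout) * D / (dI * f).
# Converter figures below are illustrative assumptions.

v_in, v_out = 12.0, 5.0
duty = v_out / v_in   # ideal buck duty cycle
ripple_a = 0.6        # target peak-to-peak inductor ripple current, amps

for f_sw in (500e3, 1e6, 2e6):  # 500 kHz, 1 MHz, 2 MHz
    inductance = (v_in - v_out) * duty / (ripple_a * f_sw)
    print(f"{f_sw/1e6:.1f} MHz -> L = {inductance*1e6:.1f} uH")

# Doubling the frequency halves the required inductance, which is why
# faster converters can use smaller parts - at the cost of narrower
# on-times, which makes very low output voltages harder to regulate.
```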
SURFACE COMPUTER
Simply put, what can we understand from the term "surface computer"? It is very obvious that it does not mean a computer with a surface. Then what? Any guesses? Well, here is the scoop.
A surface computer is a computer that interacts with the user through the surface of an ordinary object rather than through the monitor or keyboard.
We all know about the touch-screen phones that we use. Do we tend to use a keyboard with them?
No, we press our fingers directly on the screen. This could very well be considered a good example of a surface computer: we can easily move images on the screen using just our fingers and nothing else. That's the cool part about a surface computer.
There are many companies in the consumer electronics market working on surface computers, such as Microsoft (whose Surface was developed under the code name Milan) and Mitsubishi Electric with its DiamondTouch.
Surface computing involves the use of a specialised GUI (graphical user interface) in which traditional GUI elements are replaced by intuitive everyday objects. It gives us a hands-on experience of everyday object manipulation.
A surface computer can use the same processors as a regular PC, for example an Intel Core 2 Duo E7500 at 2.93 GHz with 3 GB of RAM. The whole computer becomes very compact and handy, but at the same time its cost increases considerably compared to a normal computer.
But then again, it is worth the cost, believe me.