Someday I want to have a solar powered home, and I've been gathering supplies for it. I have about 710 Amp Hours worth of batteries, but I keep shying away from buying the panels, which tend to run in the hundreds of dollars. Obviously, this dream will have to wait.
Or will it? An electronics engineer has created a system to automatically switch between a solar-battery system and mains power. Why? The solar-battery system is free to run once the hardware is paid for, but it can run down, especially if there are multiple cloudy days in a row or if your need for electricity is high (especially for air conditioning in hot summer months...my home city is technically a swamp.)
The system periodically reads the battery's charge level. If it's full, it swaps the entire house over to battery power. If it's half full, it starts swapping circuits back to mains. And if it's empty...mains for you while the system recharges. Once set up, the user of this system would get the most economical power possible at all times.
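The switching policy described above can be sketched as a simple threshold function. The thresholds and state names here are my own assumptions for illustration, not the engineer's actual design:

```python
def choose_source(charge_fraction, half=0.5, empty=0.1):
    """Map the battery's charge level (0.0-1.0) to a power source.

    Assumed thresholds: below `empty` the house runs entirely on mains
    while the batteries recharge; up to `half`, circuits start swapping
    back to mains; above that, the batteries carry the house.
    """
    if charge_fraction <= empty:
        return "mains"      # batteries exhausted: recharge on mains
    if charge_fraction <= half:
        return "mixed"      # partial charge: some circuits on mains
    return "battery"        # plenty of charge: whole house on battery
```

A real controller would also want hysteresis, so it doesn't flap back and forth between sources when the charge hovers right at a threshold.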
One issue -- there is a brief cutoff while it swaps electrical systems. Users of lamps and refrigerators probably won't notice. Users of computers would have theirs inexplicably reboot. A UPS (uninterruptible power supply) would be required for all computers in this house. Preferably with the alarms turned off, as by default they sound an annoying alarm every time there's any issue with the power whatsoever.
Showing posts with label Electronics. Show all posts
Monday, July 30, 2012
Monday, July 23, 2012
Counterfan
As an IT expert, I am surrounded at all times by whirring fans, except when I am driving, in which case rumbling motors. These fans are there to keep the electronics cool enough to function. While there are replacements, such as water-cooled systems, that are significantly quieter, such systems are always significantly more expensive.
However, sound is ultimately a waveform, and it interferes with itself destructively. A +1 and a -1 waveform will, when put together, combine to form 0. You can buy noise-cancelling headphones that work on basically this principle -- a microphone records the current sound, a microchip inverts the signal, and the inverted signal is played into the headphones, cancelling the current sound. This gave me an idea to make computers quieter.
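The cancellation arithmetic is easy to check numerically -- a tone summed with its exact inversion is silence:

```python
import math

# Sample one second of a 440 Hz tone at 8 kHz, invert it, and sum.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
inverted = [-s for s in tone]
residual = max(abs(a + b) for a, b in zip(tone, inverted))
# residual is exactly 0.0 in this idealized case; real-world cancellation
# is limited by how precisely the inverted signal matches the original.
```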
Instead of one case fan whirring away, there would be two rotating in opposite directions. The noise they produce would have opposite waveforms, cancelling each other and making the computer very quiet indeed. A small bubble of space between them would have higher than normal pressure, and a vent would be required to shove this air out of the way. The case would then have negative pressure, and slowly suck air from the room. Filters would be required in the case to prevent dust buildup on the electronics, which is somewhat of a pain in the neck to clean. (Dust interferes with thermal transfer.)
Saturday, December 31, 2011
Yarn Computer
The computers that I use on a daily basis are made of, effectively, sand and copper. Every computer, every electronic thing I have ever used, or touched, has followed this pattern. The one non-traditional computer I have seen to date was a pure relay computer, which used electromechanical switches with no silicon. The reason this design is no longer used is that it's inefficient, impossibly loud, slow, and expensive. And then there's today's strange technology:
Via Slashdot, an international team of scientists have made circuits out of, surprisingly, yarn wires. The yarn is threaded with electrically conductive materials, such as copper, and woven into various electrical switches. Additional yarn can be added to weave the item into a piece of clothing, thereby achieving the long-time goal of wearable computers -- in this case, computers that are literally clothing.
There are some minor downsides to the current state of the technology. No, it won't shock or electrocute you, but it's currently at the inefficient, impractical, slow, and expensive stage of the relay computer I linked at the beginning of this article. Much R&D is required before you'll be able to, say, use a sweater as a GPS unit or a sock to monitor your vital signs, or realize any of the other wondrous potential of these technologies. Technology sometimes has to crawl before it can walk, and walk before it can run.
Sunday, December 26, 2010
Auto Gaming
I think I heard this story a while ago, but now is the first time I actually remembered the address. An electronics engineer made a small device that automatically plays the game Guitar Hero, scoring better than any human ever possibly could, because it never makes mistakes.
Guitar Hero is a "rhythm game," in which players use a plastic replica of an electric guitar to press buttons as they appear on a scrolling musical scale, in tribute to Bach's famous quote about musicianship: “I just press the right keys (buttons) at the right time and the organ does the rest.” The guitar vaguely resembles the actual action of playing a guitar, in which a guitarist holds down some of the strings to change the effective length, and thus the pitch, of the vibrating string. Players of Guitar Hero have about two seconds of warning before they need to press the respective button.
Anyway, the machine receives a video signal, analyzes this signal, and uses it to determine when to send the button-push signal back to the game console. Two seconds is enough time for the computer to have completed its analysis, so the machine literally can't fail.
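A toy model of that timing loop might look like the sketch below. The frame rate, travel time, and brightness-threshold detection rule are all my assumptions; the real device's video analysis is surely more involved:

```python
TRAVEL_FRAMES = 120  # assumed: ~2 seconds of scroll at 60 fps

def schedule_presses(brightness, threshold=200):
    """brightness: per-frame light level (0-255) of the screen region
    where notes first appear. Returns the frame numbers to press on."""
    presses = []
    was_above = False
    for frame, level in enumerate(brightness):
        is_above = level >= threshold
        if is_above and not was_above:       # leading edge of a note
            presses.append(frame + TRAVEL_FRAMES)
        was_above = is_above
    return presses

# Notes detected at frames 2 and 5 get pressed 120 frames later:
# schedule_presses([0, 0, 255, 255, 0, 255]) -> [122, 125]
```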
Why do this project? For one, it's an interesting look at video-analysis. Visual recognition is currently one of the weaker areas in computers. Show a computer a picture and it will interpret it only as a matrix of colors. Attempts to recognize pictures of people, useful for "We have a picture of a person walking into an airport. Is this person one of these people who are wanted criminals?" have been foiled by wearing different glasses, growing or shaving facial hair, or other things that wouldn't fool a human for even a second. There is big money, then, in getting computers to actually understand what it is that they are looking at.
For another, it's the "because I can" effect. Getting a computer to exactly copy a human action is an impressive boast.
Thursday, December 23, 2010
Feed Speaker
I am subscribed to a lot of blogs. They have interesting content, and I'd like to be able to read them. I'm also strapped for time and really should exercise more. Something's got to give.
I'm imagining a device like an mp3 player. Only, instead of mp3s, it's loaded with text. Text that it loaded from my various feeds. I put on the headphones and hit play, and it reads me the news of the world in a synthesized voice. I can keep up to date with information...while jogging, lifting weights, or otherwise occupied.
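A minimal sketch of the text side of such a device, using Python's standard XML parser. The feed below is a stand-in; a real reader would fetch each subscription over HTTP and hand the resulting script to its speech synthesizer:

```python
import xml.etree.ElementTree as ET

# Stand-in RSS data; a real device would download this from each feed.
RSS = """<rss><channel><title>Example Feed</title>
<item><title>Story one</title><description>First summary.</description></item>
<item><title>Story two</title><description>Second summary.</description></item>
</channel></rss>"""

def feed_to_script(xml_text):
    """Flatten a feed into plain text for a text-to-speech engine."""
    channel = ET.fromstring(xml_text).find("channel")
    lines = ["From %s:" % channel.findtext("title")]
    for item in channel.findall("item"):
        lines.append("%s. %s" % (item.findtext("title"),
                                 item.findtext("description")))
    return " ".join(lines)
```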
This would probably be useful to other people too -- citizens of a democracy have to be well informed on current events, or democracy dies. In an increasingly time-strapped world, being able to do two things at once reasonably well is a welcome gift. I'm hoping that this leads to more news consumption than before, but somehow I doubt it.
Tuesday, November 30, 2010
Linus Akesson's Auto Plant Maintainer
A Swedish programmer, engineer, and mechatronics expert, Linus Akesson, made an interesting device to detect the moisture level in a potted plant's soil. He stopped there, on the grounds that if it over-watered, it would ruin the wood floor under the pot. Based on his work, I think I have a completely automated plant care device.
His machine measures the soil's resistance to electrical flow; moisture decreases this resistance. So a microcontroller, connected to a probe wire and a relay, can probe periodically as in Mr. Akesson's design, but rather than merely recording and reporting this resistance, it will, at a certain level, activate the relay, which will open a valve and water the plant. It can close the valve after a set period of time, or once the resistance has dropped back below a certain level.
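That control loop might look like the following sketch, with made-up threshold values and with the microcontroller's actual sensor and relay I/O abstracted away:

```python
DRY_OHMS = 50_000  # assumed: above this resistance, the soil is too dry
WET_OHMS = 20_000  # assumed: below this, the soil is wet enough

def update_valve(resistance, valve_open):
    """One step of the watering loop: return the new valve state."""
    if resistance > DRY_OHMS:
        return True        # dry soil: open the valve and water
    if resistance < WET_OHMS:
        return False       # wet enough: close the valve
    return valve_open      # in between: hold state (simple hysteresis)
```

The gap between the two thresholds keeps the valve from chattering open and shut around a single set point.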
A microcontroller could also log this data to a computer, turn on or off an electric light for indoor or space growing operations, alert me if it runs out of water, or activate some sort of a camera and store the pictures so I get a "time lapse" of the plant's growth.
If I could only get it to monitor soil nutrition and chemistry too, then I've pretty much made a machine that automates plant care. And that's of interest to homeowners, gardeners, and farmers.
Mr. Akesson also does projects with musical microcontrollers, including a chiptune piano that he demonstrates to great effect, and he has written a rather good description of the historical applications of TTY technology and why modern computers still support it despite very few people or institutions having anything remotely resembling a teletype.
Monday, November 29, 2010
Entangled Tablet
So lately I've seen searches in the analytics for a "Quantum Entanglement NIC." This isn't possible under current technology, but it gives me an excellent idea.
A Quantum Entanglement network interface card would be useful because it would be linked to another card, and the two cards would act as if physically touching, even if separated by miles, astronomical units, or even lightyears. Literally instantaneous and uninterceptable communication. If it had a slot for an RJ-45 or wireless connection on top of that, so much the better. If it didn't, well, there are enough add-on slots on a desktop computer's motherboard for a traditional network interface card.
Anyway, my idea is that we have an internet tablet (think like an iPad), and it has a quantum entanglement NIC connection to your desktop computer. The tablet would not run its own software, but instead be a mobile peripheral to your desktop machine. It is a terminal that fits in your briefcase, backpack, or other carrying device. It probably doesn't fit into a pocket or purse, but they're working on those. You would have all the features of your desktop machine, like high speed internet, your data, games, and so on, in a portable and useful form. The only thing the tablet wouldn't do well is type, unless you brought some sort of keyboard attachment.
You could have one entanglement tablet per quantum NIC installed, up to as many expansion slots as your computer has. This further expands the usefulness of computers if they have multi-user operating systems installed, and most are these days, for security and remote access's sake. One computer could be shared by four or five people, who access it through their tablets.
If you're not prone to losing physical objects, this would be good for security, too. It is literally not possible to intercept a quantum entanglement. There is no signal to snoop. However, if you do lose the tablet, you've basically given your computer over to whoever takes it, at least until you shut it down and remove the entanglement card. That would be scary.
Now, the current obstacles to this are in entanglement itself. For all its promises, quantum entanglement is a very fragile phenomenon. Entangled particles are difficult to keep entangled: if you disturb them, the entanglement is lost, and if you read or write to the connection, it typically breaks immediately after. We're not even sure that a permanent entanglement is possible in theory, or whether that instantaneous effect works over long distances rather than being limited to the speed of light like everything else. The current world record for entangled particles brought them 16 km apart before the connection was lost. If you want a NIC, it's going to have to read or write much more than 1 bit before failing.
Sunday, November 21, 2010
Automated Data Entry
If one asked me to automate data entry, I would first start with OCR technology. OCR can, given a scanned page, translate the pixels into words. The reliability is pretty good if one can guarantee that the paper was scanned perfectly straight and the original page's handwriting is reasonably legible. A mechanical arm would place the paper in the scanner, activate the scanner, and pass the result to an OCR program, and the data entry clerk's job is now reduced to verification. Any words missed or copied erroneously must be fixed, but that's easier than typing out everything by hand.
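The verification step could be assisted, too: flag only the words where the OCR output disagrees with a corrected or reference copy, so the clerk reviews a handful of words instead of the whole page. A small sketch using Python's standard difflib (the strings here are stand-ins for a real OCR program's output):

```python
import difflib

def words_to_review(ocr_words, fixed_words):
    """Return the OCR words that differ from the corrected text."""
    matcher = difflib.SequenceMatcher(a=ocr_words, b=fixed_words)
    flagged = []
    for op, i1, i2, _j1, _j2 in matcher.get_opcodes():
        if op != "equal":              # replaced, inserted, or deleted
            flagged.extend(ocr_words[i1:i2])
    return flagged

ocr = "The quick brovvn fox jumps ovor the lazy dog".split()
good = "The quick brown fox jumps over the lazy dog".split()
# words_to_review(ocr, good) flags only 'brovvn' and 'ovor'
```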
Of course, this is all expensive and difficult, which is why they pay you to do it.
Friday, November 12, 2010
AutoDefrag
Defragmentation is a useful thing to have on traditional-style hard drives. It moves your data around so that it's contiguous, which makes it read and write faster. Fragmented data has been broken into little chunks scattered around your hard drive (because that's all the room there was available at the time), and to read it, your computer has to play a "choose your own adventure" game from hell, hopping to various sectors to get every little bit.
However, a badly fragmented drive takes hours to fix up. While one can, on more recent OSes, schedule the defrag to run overnight, and leave your computer on, more likely people ignore this until the computer is slow as hell, and then wonder why. When the resulting defrag takes more than 24 hours, they're kind of upset.
More recent filesystems note that one does not constantly write data to the disk, and spend spare moments moving files around the disk to defragment them. This is called online defragmentation, and it's so efficient that you don't notice it. (Unless you're constantly downloading huge files via your impossibly fast optical fiber connection, but people who do that probably have their own ways of dealing with it.) With online defragmentation, fragmentation never gets a chance to get seriously started, because a file left alone for a few seconds tends to get scheduled for defragmentation. A few moments later, it is defragmented.
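As a toy illustration of what such a filesystem watches for, here is one way to measure how fragmented a file is, given the disk block numbers it occupies in file order (a real filesystem tracks this with extents, but the idea is the same):

```python
def fragment_count(blocks):
    """Count contiguous runs in a file's on-disk block list.
    1 means fully contiguous; higher means more head-hopping."""
    if not blocks:
        return 0
    breaks = sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)
    return 1 + breaks

# A file in three scattered pieces vs. one contiguous run:
# fragment_count([10, 11, 12, 50, 51, 90]) -> 3
# fragment_count([5, 6, 7]) -> 1
```

An online defragmenter's goal is simply to drive this count back toward 1 during idle moments.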
However, there is one kind of drive for which fragmentation is not a bad thing. SSDs are not magnetic platters like traditional drives; they are a large collection of Flash EEPROM chips. The drive can get at any part of the data on it equally fast, no matter how many pieces it's in. The main downside is that the information can only be changed so many times before that particular chip just plain breaks down. Defragging an SSD only prematurely ages the disk for no apparent gain. Other technologies, like wear leveling and TRIM, keep the disk lasting longer, and the user will want to use a filesystem that supports them. SSDs tend to be smaller and more expensive, so they are primarily useful for things you want to load often and change little, like the operating system and executable files. Your main data would be stored on another, more traditional, disk.
Saturday, November 6, 2010
Embedded DNS
A DNS server is a very simple computer that must be online and connected to the Internet 24/7. You can make it do other things too, but the important thing is that it's able to direct people who ask about your domain to your computer, even if they ask at obscure times like the very early hours of the morning. DNS is simple work, so most people make these computers do other work for them as well, like email gateways, load balancing, or some other task.
Computers use electricity to stay on, but not in the same amounts. A computer with an overclocked, top-of-the-line processor, a massive RAID array, and deep, deep banks of RAM is going to use significantly more power than a budget CPU at factory-set speed with a "green" hard drive. Electricity costs money. Not much, but it adds up over time.
I'm imagining a very simple embedded computer. It uses a very low power CPU. It has a modest amount of RAM. It has a flash drive with a basic OS and DNS support and configuration. And it has a robust network card. With a 5V DC connector, I store it in my local ISP's closet, where it can easily get power and bandwidth. It doesn't need hard drives. It doesn't need a monitor. It has no moving parts, and will gleefully point people to your servers for years and years and years.
The cost to run this thing is minuscule. We could get the unit cost down to maybe $80 at most if we produce a lot of them, and that's assuming a licensed CPU architecture like ARM. ISPs could store entire closets full of them for all their customers' hosting needs. Just one problem.
I can either make it reconfigurable on the fly, or I can lock it down so that it's hard to alter. If I make it hard to alter, then you'd have to go to your ISP's closet to change it, which is a pain if you have to make a lot of changes. (Changes like new domains, your computer moving to a new IP, or whatever.) If I make it accept connections from your desk PC, then it's so much more convenient, but it runs the risk that someone may be able to hack your password, spoof being you, and poison your information with fakes. Suddenly, your website redirects to l33t Bob's house of hackery, cleverly disguised as your company's website and stealing your customers' information for nefarious purposes!
I could compromise and allow it to only connect from one IP, and require a special encryption key to do so.
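That compromise could look something like the sketch below -- a single allowed source address plus a shared-key HMAC over each update. Every name and the message format here are assumptions for illustration, not a real DNS update protocol:

```python
import hashlib
import hmac

ALLOWED_IP = "203.0.113.7"                     # the one permitted client
SHARED_KEY = b"provisioned-in-the-isp-closet"  # installed at deployment

def accept_update(source_ip, payload, tag):
    """Accept a configuration update only from the allowed IP, and only
    if it carries a valid HMAC-SHA256 tag over the payload."""
    if source_ip != ALLOWED_IP:
        return False
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time compare
```

The IP check alone is spoofable; the keyed tag is what actually proves the update came from the domain's owner.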
Saturday, October 30, 2010
Newsspeaker
I hear that Bunnie's Chumby device is quite popular. It's a small, portable computer that can report on preprogrammed feeds of information, so you can always know what's on twitter, or in the news, or how your favorite stock is doing. While you're on the subway. Or at the dentist's. Or at work. (Warning: Do actual work while at work. Checking twitter or whatever all day will get you fired.) It's portable, and can get its Internet connection from any wireless connection, which are more and more common these days.
There are some times when you might want the information, but looking at it is a bad idea. Like, while you're driving. Taking your eyes off the road for even a second is a terrible idea, even if your stocks are tanking, your best friend is having a tempest-like breakup, or the news announced that we're now at war with the entire rest of the earth. As important as these things may be, they're not more important than not colliding with another car at 100 km/h.
So, I propose either an addition to the Chumby, or a separate device, that connects to a Bluetooth receiver in your ear and reads out, in the best synthesized voice it can manage, a summary of events as they are brought to the device's attention. It would be no more distracting than a radio, which is standard in cars and hasn't proven too severe a distraction. It wouldn't be very harsh on the device's CPU, either, due to big advances in speech synthesis. It will sound like a droning robot, which would probably get very old, very fast.
Bonus points if speech transcription software is running at the same time, so you could verbally address the thing. Even if to say, "Computer, shut up." Operating the thing while not being distracted is paramount.
Thursday, October 21, 2010
A Computer for Sir Terry
Sir Terry Pratchett, Order of the British Empire, is a very prolific writer, most famous for his Discworld series of books. He is known for heavy use of computers and the Internet, and yet has said in an interview that his favorite computer of all time was the first one he owned, the ZX Spectrum. That computer would also prove insufficient for modern works, as the same interview reported that his novels exceed 10 MB by the time of their completion, and the ZX Spectrum had at most 48K of memory and recorded to tape, a slow and sequential process.
Hobby groups exist that recreate old computers, and if Sir Terry requested it, I would ask one of them to make a portable computer with the features of the ZX Spectrum, like its rubber chiclet keyboard, and write software with a similar interface. We would need at least a word-processing program and an Internet browser. I would ask that it have at least 100MB of RAM, a framebuffer (the Spectrum supported a 256×192 color display, and I would want better than that), RF video, and wireless network access. It will also need Z80 emulation if it isn't using a Z80-type processor (do they even make those anymore?), and NFS (Network File System, a means of storing files on a separate computer reached over the network). The reason for these will become apparent in the next component.
I would then provide Sir Terry's home with a wireless network, and an immense server with RAID 10. The RAID would be shared over NFS, and this is where the Spectrum clone's OS and programs would be stored. It would have room for terabytes of data for Sir Terry to use, and would provide the Internet connection. The RAID setup would ensure both an insane amount of space and reliability in case of the loss of any one drive. Ideally, the drives would be set up for hot swapping, to minimize issues for the end user.
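As a back-of-the-envelope sketch of what RAID 10 buys you (my own numbers, purely illustrative): RAID 10 mirrors drives in pairs and stripes data across the pairs, so usable capacity is half the raw total, while any single drive can fail without data loss.

```python
def raid10_usable_tb(drive_tb: float, drive_count: int) -> float:
    """RAID 10 mirrors drives in pairs, then stripes across the pairs,
    so usable space is half the raw capacity."""
    if drive_count < 4 or drive_count % 2:
        raise ValueError("RAID 10 needs an even number of drives, at least 4")
    return drive_tb * drive_count / 2

# Eight hypothetical 2 TB drives: 16 TB raw, 8 TB usable, and the array
# keeps running if any one of them dies.
print(raid10_usable_tb(2.0, 8))
```

The halved capacity is the price of keeping a live second copy of every block, which is what makes hot-swapping a failed drive painless.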
One major obstacle I have with this is my unfamiliarity with the ZX Spectrum, which was not sold in the United States, where I live. Pictures of it suggest that it was portable and battery-powered, broadcasting video to a television set. Articles describe it as receiving its programs from audio cassette, yet only some models have a visible tape player, leaving me wondering how the other models received programs. (Loaded from an external cassette recorder, perhaps, or burnt permanently to ROM?) The design would be much simpler if wired connections were involved: electrical power can come from a socket, and wired Internet is faster and more reliable than wireless, as well as harder to snoop from outside. And video to the television almost assuredly works better sent over a wire, instead of having to broadcast it. I would want the device to resemble the Spectrum in physical size, and operate like one, only better, since it could view the Internet (which the average person couldn't in 1982), edit entire novels, and save to the massive server instead of a gazillion tape cassettes. I have literally no idea how the ZX's OS operated, and so can't help with the programming.
I would encourage the creators of the Spectrum clone to license the design from Sinclair Research Ltd, the company that invented the Spectrum, and sell it commercially. I could probably be convinced to buy one.
Sir Terry, however, would probably prefer that people donate towards Alzheimer's research, as he was diagnosed with the disease in 2007, and has noted with quite a lot of alarm that research into its cause and treatment has been somewhat lacking.
Monday, October 4, 2010
Curing ALS
Amyotrophic lateral sclerosis, known as ALS in most of the world and as "Lou Gehrig's disease" in the United States (after a famous baseball player who died of it), is a motor neuron disease that slowly kills off the part of your brain that tells your body how to move. The result is that a person suffering from it becomes more and more paralyzed until they can't breathe anymore, at which point they die. The rest of the brain is unaffected. A similar condition is locked-in syndrome, in which a person abruptly becomes paralyzed, usually after a stroke. (In that case, the motor cortex probably died in the stroke.)
Motor-cortex conditions are rather baffling to treat. The muscles are technically fine, but the person can't move them. The muscles then deteriorate from lack of use. The problem lies in the brain, which we know the least about and are the most afraid of messing with, lest we make things worse.
I think, in the circumstances, I would want to try to invent a cybernetic motor-cortex replacement. This would take 100 years of neurology research, and the best electronic-communication experts known to man. And having done so, no one would ever be paralyzed for brain-reasons again. If this research also found a good way to reconnect severed nerves, then all paralysis would thereafter be treatable.
Friday, September 24, 2010
Genetic Laptop
Embedded programming is rather different from the kind that made your web browser. The programmer is constrained by much smaller limits, and the program doesn't run in the same environment it was written in. (Most embedded computers don't even have screens!) As a result, embedded developers have created all kinds of interesting tools to make the process far less headache-inducing.
For one, embedded computers are often completely redesigned by the electronics expert, to accommodate some newly requested feature, or because better hardware has been invented. (Desktop computers use slots, so you can just remove obsolete parts and pop in the replacement. No such luck in the embedded world -- the parts are permanently soldered together.) If a redesign doesn't work, the cost of fabricating it is wasted. So the circuit is first designed completely in simulation, and nothing is manufactured until it's proven to match the given specification perfectly. When it runs perfectly in simulation, the real thing is made and compared. If they don't match, lots of research goes into why not.
I think I want to combine this with genetic programming to invent the perfect laptop. We apply genetic programming to circuits. The fitness function calls for a laptop that runs a fixed OS image with the minimum amount of electricity, at the maximum possible speed. So laptops that evolve with, say, no CPU will be selected against. (The OS doesn't run with no CPU.) I'd similarly have it eliminate designs that have no input, no output, or no sound. Circuits would have genetic components for various types of CPU, various makers of sound chips, RAM chips, north and south bridges, video chips, and the wiring to connect them. I predict a very good design would emerge within 1 billion generations.
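To make the selection loop concrete, here's a toy sketch of that idea. The part catalog, its speed and wattage numbers, and the mutation scheme are all invented for illustration -- a real run would evolve actual circuit netlists, not three-slot shopping lists -- but the fitness logic is the same: designs missing a CPU, RAM, or output can't boot and are culled, and survivors are ranked by speed per watt.

```python
import random

# Toy part catalog: (name, speed score, watts). All numbers invented.
CPUS    = [("none", 0, 0), ("slow_cpu", 10, 5), ("fast_cpu", 30, 20)]
RAM     = [("none", 0, 0), ("4GB", 5, 2), ("8GB", 8, 4)]
SCREENS = [("none", 0, 0), ("lcd", 3, 3)]
CATALOG = (CPUS, RAM, SCREENS)

def random_design():
    return [random.randrange(len(cat)) for cat in CATALOG]

def fitness(design):
    # A design with no CPU, no RAM, or no screen can't run the OS image.
    if 0 in design:
        return -1.0
    parts = [cat[i] for cat, i in zip(CATALOG, design)]
    speed = sum(p[1] for p in parts)
    watts = sum(p[2] for p in parts)
    return speed / watts          # maximum speed per unit of electricity

def evolve(generations=200, pop_size=30):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]     # keep the fittest half
        children = []
        for parent in survivors:
            child = parent[:]
            slot = random.randrange(len(child))   # mutate one component slot
            child[slot] = random.randrange(len(CATALOG[slot]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(best, fitness(best))
```

With these made-up numbers the slow CPU wins: it delivers fewer speed points but far better speed-per-watt, which is exactly the kind of non-obvious trade-off the author hopes a real run would surface.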
I will want to have this design studied. Did it evolve one CPU, or several? Did it use a video chip, or expect the CPU to handle all video rendering? (That's common with microcontrollers that have video output. A video chip costs extra, so they make the CPU do it instead.) What manufacturers did it select? How long can I run it on this cheap lawnmower battery? What is it using for storage? (Traditional hard drive? If so, SATA or IDE? Flash chips? SSD? Something different altogether?) Did it skimp on any particular feature to save power? (Maybe it's sparse on RAM chips, as those have to be kept powered continuously so as not to lose their contents.)
My last question will be: "What would it cost to manufacture?" Quite possibly, it could prove...profitable.
Wednesday, September 15, 2010
Screws
The parts of my computer are held together with screws. Very tiny screws. Maybe a centimeter in size. Sometimes I have to take them apart. Usually for cleaning, as they attract crazy amounts of dust. Or, alternatively, to upgrade something. (New hard drive, more RAM, replace something that broke, new CPU, or whatever.)
I try to keep the screws organized, mostly because if I lose one, there is no hope of ever replacing it. Despite the standardization in screw sizes, they have no real names. Not "screw with length of 1cm, hexagonal head, and 5mm pitch" -- more like "case screw," "expansion card screw," and "fan attachment screw." The expansion card screws are like a slightly smaller version of case screws. All of them use a Phillips head screwdriver, and the only way I have of getting new ones when I lose one is to order a new part, which will come with new screws for pretty much this reason.
I have a dismantled power supply, which I took apart to clean the dust out of it. It's from a dead computer. I can't put it back together, because I put the screws in my big screw case. I cannot find the big screw case. Accordingly, unless I somehow cough up eight "fan screws," and four "tiny screws," I can't fully put it back together. It sits, semi-dismantled, on my desk.
These screws cannot be terribly expensive, but without a name for them, I can't order more. Not without buying YET MORE PARTS.
Wednesday, September 8, 2010
Quantum Computing
The news likes to report about Quantum computing as if it were a device that you could plug in to your wall, attach to your monitor and keyboard, and then immediately begin typing away at. This is wrong.
Quantum computing is an emerging technology that we're still working out at the theoretical level. It's like DNA "computers," which you can't type on because they're a beaker full of bio-goo: researchers leave the goo overnight to solve whatever problem, then take it apart the next day. The goo often finds interesting solutions to complex problems, but it's not going to show up on a screen.
Similarly, existing quantum computers are series of atoms entangled in a very complicated way, such that researchers can "read" their properties and "write" them by changing them. If the technology is ever commercialized, it'll be an add-on card for your existing computer, not a brand new machine.
Quantum computing's main advantage is "superposition," in which atoms can hold more than one state at the same time -- but the superposition collapses if meddled with, and atoms being as small as they are, even "reading" them collapses it. So quantum computers specialize in problems where a correct solution is easy to verify, but there's no better way to find one than to guess until you get it. You put every possible answer into superposition at the same time, the machine amplifies the correct one until a measurement returns it, and then you verify that answer with traditional silicon computing.
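Here's a classical illustration of that "easy to verify, hard to find" shape, using a made-up example: recovering a 4-digit PIN when all you have is a checker that answers yes or no. A classical computer can do nothing smarter than enumerate guesses; Grover's algorithm on a quantum computer would find the answer in roughly the square root of that many steps (about 100 instead of up to 10,000).

```python
import itertools

SECRET = "7291"  # hypothetical PIN, hidden behind the verifier

def verify(guess: str) -> bool:
    """Cheap to check -- but knowing HOW to check tells you nothing
    about WHERE the answer is."""
    return guess == SECRET

def classical_search():
    """Brute force: try every candidate until the verifier says yes."""
    tries = 0
    for digits in itertools.product("0123456789", repeat=4):
        tries += 1
        candidate = "".join(digits)
        if verify(candidate):
            return candidate, tries
    raise RuntimeError("no candidate verified")

answer, tries = classical_search()
print(answer, tries)  # finds "7291" after 7292 guesses
```

The quantum win is entirely in the search step; the final verification is exactly this kind of ordinary silicon check.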
As of this writing, the most powerful quantum computer in the world had only 7 "qubits": basically, 7 linked atoms. Since each qubit can hold two states at once, that's 2^7 = 128 distinct states. This is nowhere near being able to do anything practical, but you have to crawl before you can walk. They're working on one with 9 qubits, but to do anything practical, you'd need at least a hundred, and preferably a thousand.
Also, we'd have to have some way of reliably electronically "reading" and "writing" the atoms. Your quantum computer isn't going to be very valuable to you if you have to hire a quantum physicist just to learn what the results even are.
Tuesday, September 7, 2010
Life Analysis Device
Once upon a time there was an artist/engineer who made a system he called the Narsci-system: after Narcissus, the Greek legend of the man who fell in love with his own reflection, and "system," as it was made of multiple sensors that communicated with a single point for concatenation and analysis. Also because it was "narcissistic" to pay so much attention to the goings-on of his own body.
So I'm imagining a medical version of this. It would continuously scan and record the goings-on of the user's body and communicate back to a sophisticated computer for analysis. The computer, using the best advice medical doctors could give, would offer advice for certain situations. If the user had been awake for 20 hours straight, it would suggest going to bed. It would suggest morning jogs, and even tell you to speed up or slow down. (There is an ideal heart rate to target in aerobic exercise.) It would complain if you moved too little, as measured by an accelerometer, since that suggests laziness or a lack of exercise -- or too much, which suggests the need for a day off. If it found a problem it didn't recognize, it would suggest a medical checkup.
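The speed-up-or-slow-down rule could look something like this. The zone comes from the common 220-minus-age estimate of maximum heart rate with a 50-85% aerobic band -- a rough rule of thumb, not medical advice, and a real device would calibrate per user:

```python
def aerobic_zone(age: int) -> tuple[float, float]:
    """Target aerobic heart-rate band (beats per minute), using the
    rough 220-minus-age estimate of maximum heart rate."""
    max_hr = 220 - age
    return 0.50 * max_hr, 0.85 * max_hr

def pace_advice(age: int, current_hr: float) -> str:
    """What the earpiece would tell a jogger right now."""
    low, high = aerobic_zone(age)
    if current_hr < low:
        return "speed up"
    if current_hr > high:
        return "slow down"
    return "keep going"

# A 30-year-old jogging at 120 bpm is inside the 95-161.5 bpm band.
print(aerobic_zone(30))
print(pace_advice(30, 120))
```

The rest of the device is the same pattern repeated: a sensor stream, a threshold tuned by doctors, and a canned suggestion when the reading leaves the band.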
And in extreme situations, maybe it could even actively intervene. Heart rate dangerously low? Stimulant injection! EEG showing abnormality? Cortical stimulation! Dangerous level of vibration? Deploy airbag!
Monday, September 6, 2010
In which I attempt to design a CPU, part 2
At this point, we'd lay out the electronics, grouping similar parts together. All the math circuits sit near the logic circuits, combining to form an ALU, or Arithmetic Logic Unit. The control circuits are nearby, and with our leftover space, we add little chunks of super-fast, super-expensive memory called registers.
This is typically so complex that developers turn to hardware description languages like Verilog, in which the requirements are laid out rather like a C program and handed to a tool that synthesizes a circuit design accomplishing what was specified. That design can then be given to your fabricator, who cranks out the chips.
But before you crank them out, first you get one, and you test it. No sense in paying for defective chips, right? In fact, that's part of the reason for all the computer support. They can simulate your chips before you even make them, so you know your design is good before you've made even one. Then you make one and test that to prove that the simulation was accurate.
Sunday, September 5, 2010
In which I attempt to design a CPU, part 1
The CPU is a chip at the core of your computer. It does all the "work" that lets you do useful stuff: all the math, all the logical processing, to create an environment in which you can work. It receives your input. It sends your output to a graphics chip for processing, or if none is available, makes the picture itself. It does all your calculations. And it's very very complicated.
So if I'm going to make one, I'll need an instruction set that defines what it can do. First I have to decide what it needs to do: math, logic, control, and manipulation. Also a little bit of storage, so the calculations we do last until we can put them in memory.
* Math
- Addition
- Subtraction
- Multiplication
- Division
- Modulo (Divide, throw away the result, and provide the remainder.)
* Control
- Unconditional jump ("Goto")
- Branch if ("If X then Y")
- Store (in memory)
- Load (from memory)
- Interrupts (signals that make the CPU drop what it's doing and run a handler routine, for everything from printing to quitting)
- Rotate left (Shift every bit left by one, carry over the leftmost to a flag, and the flag to the rightmost. Encryption uses this.)
- Rotate Right (Like rotate left, but in the opposite direction)
* Logic
- AND (Both conditions required)
- OR (Either condition required)
- NOT (Switch to the other)
- XOR (One, or the other, but not both)
- Compare (Is X the same as Y?)
* Redundant (things covered by the above, but I want a special code for them because the special code can do the same thing FASTER.)
- Increment (Add 1)
- Decrement (Subtract 1)
- Shift left (Effectively, multiply this by 2)
- Shift Right (effectively, divide this by 2)
Each of these will need to be assigned a number -- an opcode -- which tells the CPU to use that operation. And then we'll either have to fabricate circuits that perform each action, or write microcode to perform it. Most CPUs these days use microcode for their more complex operations, because it's cheaper and easier to fix. I think, though, that I'd rather implement with hardwired logic, on the grounds that a more power-efficient design can be made that way. Besides, this is supposed to be simple.
Now arguably, multiplication and division are redundant operations too, because multiplication can be implemented as a loop of additions, and division as a loop of subtractions, but they're so common in computer operations that I think they deserve their own opcodes. Besides, I do expect to be able to do floating point stuff.
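To show how a few of the opcodes above actually behave, here's a toy sketch of an accumulator machine in Python. The register layout and semantics are my own choices for illustration, not any real CPU's, but the rotate-left matches the description in the list: every bit shifts left, the leftmost bit goes into a carry flag, and the old flag comes back in on the right.

```python
class ToyCPU:
    """A toy 8-bit accumulator machine implementing a handful of the
    opcodes listed above (semantics invented for this sketch)."""

    def __init__(self):
        self.a = 0        # accumulator register
        self.carry = 0    # carry flag

    def add(self, n):
        # Math: addition, wrapping at 8 bits; overflow lands in carry.
        total = self.a + n
        self.carry = total >> 8
        self.a = total & 0xFF

    def rol(self):
        # Rotate left through carry: leftmost bit -> flag, flag -> rightmost.
        top = (self.a >> 7) & 1
        self.a = ((self.a << 1) & 0xFF) | self.carry
        self.carry = top

    def shl(self):
        # Shift left: effectively multiply by 2, top bit falls into carry.
        self.carry = (self.a >> 7) & 1
        self.a = (self.a << 1) & 0xFF

    def inc(self):
        # Redundant-but-fast: add 1.
        self.add(1)

cpu = ToyCPU()
cpu.add(0b1000_0001)   # A = 0x81
cpu.shl()              # A = 0x02, carry = 1 (the top bit fell into the flag)
cpu.rol()              # A = 0x05, carry = 0 (old carry rotated into bit 0)
print(hex(cpu.a), cpu.carry)
```

Note how shift and rotate differ only in what happens at the edges -- which is exactly why rotate earns its own opcode instead of being built from shifts.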
Wednesday, September 1, 2010
Cleaning Electronics
You know what I hate? When dust or dirt gets into my computers. It's such a pain in the ass to clean.
Electronics such as computers generate heat from electrical resistance and state transitions while operating. This heat must be removed for the continuing health of the computer. Usually, this is done with a small fan, which blows cool air over the hot electronics; the air takes the heat with it as it blows away.
However, this air brings dust with it. Meteor debris, fibers from my clothing, bits of my discarded skin and hair, and other small debris combine to form dust, an annoying and quasi-sticky grey substance. It smells. It is an excellent thermal insulator, and it loves to stick to electronics. The more dust a computer has, the harder it is to keep cool. Damn it. So, periodically, the dust has to be removed -- from small, tight places that cannot be washed, because water plus electronics equals an extremely bad short circuit.
I've typically been using a damp (not wet, damp) cotton swab, and a damp paper towel on larger areas, to take out the dust, then leaving everything off for an extra hour just to be sure it's dry. Most computer professionals prefer compressed air, which makes short work of all the dust in one fell swoop. (Though it's rough on the fans, which get spun to ludicrous speeds unless you hold the blades still.)
The strangest available solution is the fishtank computer, which is sealed and cannot possibly get dirty. Wait, what? One takes a fish tank, those little fishtank rocks, the computer parts, and several gallons of mineral oil. Arrange the computer parts in the tank, with the hard drive outside: the hard drive must stay in air, because it has a pressure-equalization mechanism that is ruined by mineral oil, killing the entire drive. (Unless it's an SSD; then it's fine.) Make sure all the wires are connected, and turn it on to prove that it works. Then add the mineral oil. Your computer now appears to operate "underwater." Mineral oil works because it's as clear as water, but unlike water it is chemically nonpolar (and thus non-conductive), so it will not interfere with the operation of the computer.
For best results, an electronics expert should build "port repeaters" into the lid, so you can plug everything into the lid, which plugs into the computer below. And lo, it runs, it cannot attract dust, and it vents all its heat into the mineral oil, which the fans swirl around, using the glass as one big heat sink. It also muffles all sound produced by the computer, eliminating that annoying fan whirr.
One downside is that if the electronics ever have to come out of the mineral oil...they're covered in mineral oil, and rather icky to the touch.
Readers: how would you clean electronics? Mind you, wet electronics will instantly short-circuit if switched on, so water (and other polar liquids) should be avoided at all costs.