Wednesday, November 26, 2008
Before this crisis, I heard many commentators claim that the United States had a shortage of skilled labor and a surplus of unskilled labor.
Unskilled labor means jobs that anyone could fill: clerk at McDonald's, bathroom janitor. Little training is required, and many of these jobs don't even require you to have finished secondary education. However, since everyone can do them, competition for them is fierce, and having the job can mean the difference between affording an apartment and living under a bridge. Only so many of these jobs are available, since companies and people are only willing to pay so much to get the bathroom clean.
Skilled labor requires university degrees, and prefers work experience on top of them: doctors, lawyers, engineers, system administrators, scientists. Obtaining the human capital needed to fill these positions is a long and expensive prospect, which is why few people qualify. These people make vast amounts of money for the companies they work for, and there is a nearly endless hunger for them. At least, for the ones who are provably good; nobody's that interested in a shoddy engineer.
Scholarships would be a promising way of solving this disparity, but who would fund them?
Tuesday, November 25, 2008
Economy
The US, indeed the world, is in economic trouble right now. The subprime mortgage bubble burst rather abruptly, and quite a few institutions were heavily invested in things that, to everyone's surprise, had no value whatsoever. Here in the States, the government is bailing out several companies, to the delight of some and the annoyance of others.
Jobs would be key to resolving this problem, as additional jobs would mean additional spending and consumer confidence. Unfortunately, since the banking industry has been heavily involved in this crisis, there is a bit of a credit crunch. Everyone is too scared to lend money, which decreases the availability of both spending and jobs.
This problem may have started in the US, but it has spread to the entire world. Iceland's banks are mostly bankrupt from their involvement in subprime securities, and they've dragged the entire Icelandic government into bankruptcy with them. European banks are failing. Chinese factories are closing for justifiable fear that they will be unable to sell the goods they produce. (Justifiable because the US is their #1 customer, the European market also buys a lot, and neither can afford to buy as much anymore.)
Trying to trace the wellspring of economic activity, in this case so that we can restart it, has always given me a headache. I'm frankly reminded of Terry Pratchett's "Making Money," in which the main character reveals that the economy of the fictional city-state in which he lives is basically one entire flimflam. A confidence game. Thinking about it, currency is essentially an excuse to reward people for working, so that people can have things and distribute them in a "fair" way. This drives libertarian types absolutely bonkers, and they advocate a return to metal-backed currencies over the law-backed ones that currently exist. I don't think that would really change anything.
It's my opinion that the government's bailouts should be attached to actual equity in the company, so that the executives who caused this mess have to frantically re-earn their position by buying it back. Unfortunately, the United States has always been wary of government involvement in businesses, fearing that strong control may lead to totalitarianism. The bailouts will likely continue with no consequence to the parties responsible for the problem, which leads to "moral hazard."
"Moral Hazard" is an economist's way of saying that if a person escapes the consequences of their actions, it will inspire them to more irresponsible behavior. If you had an insurance policy that gave you a million dollars if you broke your leg, you might be less careful about keeping your leg unbroken. (Or you may even deliberately break it if you need the money. It's happened before.) Likewise, when you control the payroll, why not give yourself an unearned raise? If this leads to loss of confidence in your company, it's not your problem because you can go retire to Tahiti and your successor will just collect money to stay afloat.
Now I suppose it would be equally bad to leave corporations afraid to take any action lest it bankrupt them, but there must be some consequence to failure. Society fares poorly when it tolerates double standards.
Also, I hear that the infrastructure could use some repair here. We should reinstate the CCC (Civilian Conservation Corps) to boost consumer spending and reduce unemployment.
Sunday, November 23, 2008
A quick thought about cars
In a world where OBD-II exists, why the hell can't my car automatically run its own checkup?
And while we're at it, I should install one of those automatic tire inflators too.
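As for the self-checkup part, the plumbing already exists. Here's a minimal sketch of asking the car for its stored trouble codes through a cheap ELM327-style adapter plugged into the OBD-II port and showing up as a serial device; the port name is a placeholder, and the raw response is returned undecoded rather than parsed into standard DTC codes like P0301.

```python
# Minimal sketch: read stored diagnostic trouble codes via an ELM327-style
# OBD-II adapter over a serial port, using pyserial. Port name is assumed.
import serial  # pyserial

def read_trouble_codes(port="/dev/ttyUSB0"):
    with serial.Serial(port, baudrate=38400, timeout=2) as link:
        def send(command):
            link.write((command + "\r").encode("ascii"))
            return link.read_until(b">").decode("ascii", errors="replace")

        send("ATZ")        # reset the adapter
        send("ATE0")       # turn off command echo
        return send("03")  # OBD-II mode 03: request stored trouble codes

if __name__ == "__main__":
    print(read_trouble_codes())
```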
Saturday, November 22, 2008
Auto Clean Bathroom
If there's one thing modern people all have in common, it's that they all loathe doing housework yet feel it's necessary, because nobody likes a filthy, disorganized living space. So I'm starting automation plans so that it gets done for you and you can do more important things, like your job, your hobbies, raising any children you have, or maybe finishing that great work of yours.
The bathroom is a room of entirely fixed features. There's a toilet to dispose of waste, a bathtub or shower to clean off one's body, and a sink for hand washing. All of these tend to collect the filth they remove from your body and periodically need to be cleaned, a nasty chore frequently mentioned in rants about how people hate housework.
Since none of these move, and cleaning consists of the exact same motions every time, automation is fairly straightforward. Robotic arms attached to the ceiling can, on a cue or timed signal, lower a scrubbing brush with a cleaning-solution squirter, squirt solution, scrub, and flush away. Human maintenance would only involve installing the system, setting up the cue (press a button to clean, or "run every day at 3 am"), and refilling the cleaning-solution tanks. The brushes should also be interchangeable, since they wear out with use and will need to be replaced.
The toilet's cleaner consists of a squirting nozzle and a brush. Squirt solution on the sides of the bowl, then swirl the brush around and down the bowl until it reaches the bottom. Then flush. The toilet is now clean. Raise the arm back into its storage position.
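To make the cue-and-sequence idea concrete, here's a toy sketch in Python. Everything hardware-facing (the arm object and its methods) is a hypothetical placeholder, not a real robot API; the only point is the shape of the control logic: one fixed cleaning sequence, fired once a day at a set hour.

```python
# Toy control logic for the toilet-cleaning arm. The arm and its methods are
# hypothetical placeholders; only the sequencing and the daily cue matter here.
import time
from datetime import datetime

def clean_toilet(arm):
    arm.lower()
    arm.squirt_solution()        # coat the sides of the bowl
    arm.scrub()                  # swirl the brush around and down the bowl
    arm.flush()
    arm.raise_to_storage()

def run_daily(job, hour=3):
    """Fire the cleaning job once a day at the given hour (e.g. 3 am)."""
    last_run = None
    while True:
        now = datetime.now()
        if now.hour == hour and last_run != now.date():
            job()
            last_run = now.date()
        time.sleep(60)           # poll once a minute
```

Wiring them together would look like run_daily(lambda: clean_toilet(arm), hour=3), with arm supplied by whatever hardware actually exists.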
The tub's cleaner first has to wet the tub slightly, in case it's dry, then lay down a bleach powder. It should wait about 5 to 10 minutes, then lower a brush arm and scrub the powder around. It should then rinse the powder off, taking all the dried soap and dirt away with it, and finally store the scrubber heads.
Sinks would get a similar treatment to the tub, but less extensive and with a shorter waiting time.
Lastly, the tub should have a plughole cleaner, because people often lose hairs while showering without noticing. (At any given time, about 90% of your hair is growing and 10% is falling out to be replaced with new hair.) Removing these hairs gets quite disgusting, as the hair is wetted and dried repeatedly before the user notices. By the time it's cleaned, the hair is a vile-smelling, soggy mess. Yuck.
For best results, the shower should be sealable, as this allows the user to enjoy a hot shower with less hot water, doesn't steam up any nearby mirrors, and prevents mildew from forming.
Monday, November 17, 2008
Antibiotic Resistance
When antibiotics were first discovered, they were hailed as a magic bullet that would solve all disease forever. Miraculously, the same pill cured a number of diseases at once. Infections killed far fewer people, as the bacteria that caused the disease could not endure the chemical onslaught.
Unfortunately, laypeople thought that antibiotics were effectively magic anti-disease pills. Even today, there are people who want antibiotics prescribed to them to cure colds, which are caused by viruses and not affected by antibiotics. Agriculture has been giving animals antibiotics prophylactically, and none of this happens in a vacuum.
Bacteria existed before humans, and would easily survive almost all the events that could end humanity. They can divide as often as every 20 minutes, and they have a long history of chemical warfare against each other and of adapting to survive harsher and harsher environments. It wasn't long before some bacteria evolved ways to survive the presence of antibiotic drugs. People using antibiotics in strange ways gave these bacteria an advantage, since stopping and starting treatment gave them time to recover and spread their genes to the non-resistant. (Bacteria are capable of sharing some of their genes by swapping them, even without reproducing. This sharing helps them survive environments that change frequently.)
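A toy calculation makes the "stopping and starting" point concrete. All the rates below are invented for illustration: the drug removes sensitive cells quickly and resistant cells slowly, and anything driven below a small threshold is assumed to be mopped up by the immune system. Cut the course short and a sizable, resistance-enriched remnant survives to regrow; finish it and the infection is actually cleared.

```python
# Toy model (all rates invented) of cutting an antibiotic course short.
def simulate(days_on_drug):
    sensitive, resistant = 1_000_000.0, 10.0    # resistant mutants start out rare
    for _ in range(days_on_drug):
        sensitive *= 0.05    # drug kills ~95% of sensitive cells per day
        resistant *= 0.7     # resistant cells decline far more slowly
    total = sensitive + resistant
    cleared = total < 100    # assumed immune-system mop-up threshold
    return total, resistant / total, cleared

for days in (3, 10):
    total, resistant_fraction, cleared = simulate(days)
    print(f"{days:2d} days of treatment: {total:8.1f} survivors, "
          f"{resistant_fraction:.1%} resistant, infection cleared: {cleared}")
```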
Nowadays, antibiotic-resistant bacteria are quite common, and more and more antibiotics are becoming useless. There are even bacteria that require antibiotics to survive. Researchers take advantage of this by engineering bacteria that depend on antibiotics: should a culture become contaminated with unwanted genes, it can be killed off by cutting the supply of antibiotics, and the dependence also keeps some wild germs out. For us, however, all of this is quite bad, since the usual means of treating disease aren't working so well anymore, and if somebody comes to the hospital with a serious infection, they could die of it as easily as before antibiotics were invented.
There are two pieces of good news here. One is that an alternative has been developed. In the Soviet Union, antibiotics were in short supply, so doctors there developed viruses that destroy common bacteria and go inert when there are no more bacterial cells to infect. They called these "bacteriophages," from "bacteria" and the Greek "phagein," to eat. Bacteria have no lasting defense against bacteriophages, which adapt and change like any other virus. The downside is that bacteriophages are specific to the type of bacteria and must be specially developed. The process isn't difficult, but it does involve culturing the original disease, identifying it, and injecting the patient with the correct phage, which takes at least two days.
The other piece of good news is that antibiotic-resistant bacteria gain their resistance at a cost, and are, in the absence of antibiotics, weaker than their non-resistant brethren. Pre-antibiotic treatments (sterile food, sterile water, rest, and non-antibiotic drugs where available) should prove extra effective. If the patient avoids antibiotic drugs, recovery is likely.
What you can do to keep antibiotics useful is to use them only when a doctor tells you to, to follow the instructions exactly, and to finish the entire prescription even if you feel better. (You start feeling better before all the bacteria are destroyed.) This way, you will not breed antibiotic-resistant bacteria. If you are a doctor, phage therapy shows significant promise.
Monday, November 10, 2008
Operating Systems
Yesterday, a man asked me what the point of operating systems was.
It's technically possible to do without one, and the first computers had none. The first computers were designed to do one task, and one task alone. If you wanted it to do a different thing, you had to rebuild the computer.
The first development consisted of storing not just the data in memory, but the program too. Now it was possible to have different programs, although only one at a time, and they still had to be written in direct machine language (which is a pain in the ass for even professional programmers.)
Finally, simple operating systems developed. Now you had a standardized environment, adaptable to different computers, that could accept programs written in higher-level languages. The personal computer was now possible, although it didn't happen yet, because computers were still too expensive.
Time-sharing was the next obvious idea, because the institutions that owned computers had many users, and much of a computer's time was spent waiting. One could divide the CPU's attention between all the users in such a way that each could feel they were the computer's only user. This was far more productive -- instead of running one process at a time, one process that spent most of its time waiting, now the computer was running thousands of processes in tiny slices of time. Each ran as fast as if it were the only one, so more of the CPU's time was spent working instead of waiting.
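As a rough illustration (plain Python generators standing in for processes, nothing like a real kernel scheduler), round-robin time slicing looks like this:

```python
# A toy round-robin scheduler: each "process" is a plain Python generator, and
# the scheduler gives every process one slice of work before moving on, so all
# of them make steady progress as if each had the machine to itself.
from collections import deque

def process(name, steps):
    for i in range(steps):
        yield f"{name}: slice {i + 1} of {steps}"

def round_robin(processes):
    ready = deque(processes)
    while ready:
        current = ready.popleft()
        try:
            print(next(current))     # run one time slice
            ready.append(current)    # then send it to the back of the line
        except StopIteration:
            pass                     # this process has finished

round_robin([process("spreadsheet", 3), process("music player", 2), process("backup", 4)])
```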
When personal computers became more common than servers, GUIs became popular. The Graphical User Interface (GUI) is more popular than the alternative (the Command Line Interface, or CLI) because most people can point and click faster than they can type. A few expert users prefer CLIs, certainly, but they're a minority. Of course, time sharing was still incorporated, because users do want to do more than one thing at a time.
A modern OS must do the following:
* Manage resources such as drivers and memory so that no two programs claim the same piece at the same time (which would be extremely bad.)
* Control the disks so that programs can merely manage files, not disk blocks. Files are simpler for the program.
* Control what program runs when (how much priority does "My budget spreadsheet" have over "Play music" or "Solitaire"?)
* Make sure that no resource runs out. (You get warned if you run low on memory.)
* Manage caches to speed up all access
* Provide a standard interface to programs so that program writers don't have to worry about that crap. Also, programs operate in a similar fashion, so it's easier for the users.
* Be able to stop programs that endanger the operation of the computer (such as programs with endless loops, programs that try to alter each other)
* Most recently, attempt to stop malicious programs, like spyware and viruses.
OSes have bloated significantly over the years. Windows Vista requires more than a gigabyte of RAM to run, and even the tiniest Linux wants 16 megabytes. By comparison, the first personal computer operating systems ran in a few kilobytes. Admittedly, priorities are different now: memory is cheap and plentiful and user time is more precious, while back then memory was painfully expensive and had to be conserved at all costs.
There is one reversal of this trend. Microsoft is planning a new release of Windows, which they're only calling "7" for now, which can reportedly be installed in 25MB; more typical usage requires 250MB, which most users definitely have now. Why the sudden interest in efficiency? While the typical desktop machine keeps gaining capability, there is a big market in notebook computers, which keep their capabilities minimal to maximize battery life. There are also embedded computers, which have little memory and no hard disk, and often boot from specialized Read Only Memory (ROM) chips. Microsoft definitely has an interest in increasing its share of the notebook and embedded markets. (Embedded computers are used when the computer will never be directly operated by the user, and are typically contained in another device, like a car. The car contains a computer, and you never directly use it, but the car wouldn't work right without it.)
So in short, if you like your computer to do more than one thing, more than one thing at a time, if you like having a variety of cheap programs, and you like your programs to have some standardization, like "CTRL + Q" for "quit" instead of whatever sequence a programmer felt like using today (ESC + CTRL + W + Q + 8), then you definitely want to have an operating system.
Saturday, November 1, 2008
Home power
The average home draws about 1.5 kW of power. Traditionally, homes connect to a city power grid that sells them electricity from generating plants. Customers are charged per "kilowatt-hour" (1 kW for 1 hour, a unit of energy equal to 3,600,000 joules). I have heard prices quoted as low as 5 cents per kWh and as high as 20 cents per kWh. At 1.5 kW around the clock, that's 36 kWh per day, so assuming 10 cents per kWh, the average home's bill is about $3.60 per day.
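Worked out as a quick sanity check, using the same figures as above:

```python
# The arithmetic from the paragraph above: a constant 1.5 kW draw, billed per
# kilowatt-hour, at the low, middle, and high rates quoted.
AVERAGE_DRAW_KW = 1.5
HOURS_PER_DAY = 24

daily_energy_kwh = AVERAGE_DRAW_KW * HOURS_PER_DAY   # 36 kWh per day

for price_per_kwh in (0.05, 0.10, 0.20):
    daily_cost = daily_energy_kwh * price_per_kwh
    print(f"At ${price_per_kwh:.2f}/kWh: ${daily_cost:.2f} per day")
```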
However, with a few clever technologies, these homes could easily power themselves. This would prove a major advantage if the grid ever shuts down, as it does during earthquakes, hurricanes, tornadoes, and other disasters, or as it sometimes does by accident. (Sometimes areas of the grid spike with power and must be shut down to avoid destroying equipment. Sometimes the shutdown is an accident, but it takes a while to start up again.)
Solar power varies by insolation. In the United States, western Arizona gets the most sun, collecting 6.5 KWH per square meter of panel per day. A region near the northern section of the Idaho/Montana border gets the least at 3 KWH per square meter per day. If homes in these two regions wanted to power their homes by solar panel, backed by large batteries, the home in Arizona would need 6 m^2 of panel, but the home in Idaho or Montana would need 12 m^2 of panel.
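Those panel areas fall out of the same 36 kWh/day figure, and they implicitly assume every bit of sunlight gets turned into electricity; a real panel's conversion efficiency (around 15% is the assumed figure below) multiplies the required area considerably. A quick sketch:

```python
# Panel area needed to supply 36 kWh per day at the two insolation figures above.
# The 6 m^2 and 12 m^2 figures in the text assume perfect conversion; the 15%
# efficiency used here is an assumption for a realistic panel.
DAILY_NEED_KWH = 1.5 * 24   # 36 kWh per day

def panel_area(insolation_kwh_per_m2_day, efficiency=1.0):
    return DAILY_NEED_KWH / (insolation_kwh_per_m2_day * efficiency)

for place, insolation in (("western Arizona", 6.5), ("Idaho/Montana border", 3.0)):
    ideal = panel_area(insolation)
    realistic = panel_area(insolation, efficiency=0.15)
    print(f"{place}: {ideal:.1f} m^2 if conversion were perfect, "
          f"about {realistic:.0f} m^2 at 15% efficiency")
```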
Diesel powered generators are common for emergencies. They are not frequently used, as they produce only about 400 watts, require constant fueling to run, and are noisy, stinky, and inconvenient. Still, many people have them because it's better than not having power.
Radioisotope power really comes in handy here. The Idaho National Laboratory currently makes radioisotope electric generators for spacecraft. These consist of cylinders, about 6 feet long and perhaps 1 foot in diameter, which produce a slow, constant trickle of electricity from the heat of nuclear waste. Each cylinder produces 300 watts of power with no radiation escaping. (Apparently the waste is mostly alpha emitters, and alpha radiation cannot make it through the outer jacket. In fact, if you held an alpha-emitting metal in your hand, your skin would block all the radiation. Just don't eat it.) At 300 watts each, 5 cylinders would power the home, and could easily be stashed in a basement corner and forgotten.
Wind power is possible in some areas, but only where the wind blows reasonably consistently. I can think of only two places where this holds, one in South Dakota and one in California, and both are somewhat free of housing at the moment. Wind power would also use quite a lot of space, so it probably isn't a good fit for home power.
Prior experiments have proven that home-generated power is perfectly compatible with grid power. In fact, most regions require the utility company to buy back power should you produce more electricity than you use. In such a case, your electric meter literally runs backwards. The utility companies complained bitterly, and most have significantly slowed the rate at which the meter turns backwards, but it does indeed turn backwards and does indeed reduce your bill. In most of these regions, they had little reason to complain. California passed such laws in the wake of massive brownouts caused by a lack of capacity, and the utility companies' argument looked very stupid indeed in light of that.
Since most grid power in the United States comes from coal and hydroelectric plants, conserving it has a major environmental impact. Coal smoke stinks, and lower demand means less burned coal.