http://www.newscientist.com/article/dn21495-usb-stick-can-sequence-dna-in-seconds.html
Quote: a little gadget that can sequence DNA while plugged into your laptop
Quote: the DNA does not need to be amplified
Quote: can sequence DNA strands as long as 10,000 bases continuously
Quote: the MinION would take about 6 hours to complete a human genome
Quote: Each unit is expected to cost $900 when it goes on sale later this year
WhattheIdon'teven. No PCR, no shotgun sequencing, speed comparable to pyrosequencing, fits in the palm of your hand, and COSTS LESS THAN 1000 DOLLARS.
How it works -
Quote: Oxford Nanopore is also building a larger device, GridION, for lab use. Both GridION and MinION operate using the same technology: DNA is added to a solution containing enzymes that bind to the end of each strand. When a current is applied across the solution these enzymes and DNA are drawn to hundreds of wells in a membrane at the bottom of the solution, each just 10 micrometres in diameter.
Within each well is a modified version of the protein alpha hemolysin (AHL), which has a hollow tube just 10 nanometres wide at its core. As the DNA is drawn to the pore the enzyme attaches itself to the AHL and begins to unzip the DNA, threading one strand of the double helix through the pore. The unique electrical characteristics of each base disrupt the current flowing through each pore, enough to determine which of the four bases is passing through it. Each disruption is read by the device, like a tickertape reader.
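A toy way to picture that readout step, in Python, with completely made-up current levels (the real signal processing is far messier, reading several bases at a time through a lot of noise):

    # Toy nanopore base-caller: pretend each base blocks the pore current by a
    # characteristic amount, and call whichever reference level is closest.
    # The picoamp values below are invented purely for illustration.
    REFERENCE_LEVELS = {"A": 52.0, "C": 48.0, "G": 45.0, "T": 50.0}

    def call_base(measured_current):
        return min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - measured_current))

    def call_sequence(current_trace):
        return "".join(call_base(i) for i in current_trace)

    print(call_sequence([52.1, 47.8, 49.9, 45.2]))   # -> ACTG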
This is science fiction territory, people. Combine one of these with an iPhone, and you have damn near a tricorder.
My only question in this is "are the MinIONs one-time use or multi-use?" Because if they are multi-use /I. Want. One./
That's cool as hell.
I'd like to know more about why they used AHL. Like, did they go "hey look, that tubey thing that MRSA uses to breach cell membranes and eventually pop them to feast on their iron would do the trick"?
OK, should have googled. It seems to be fairly established as a technology. http://www.ks.uiuc.edu/Research/hemolysin/
OK, not so well established. WATCH ME FUMBLE WITH GOOGLE ALL NIGHT. 15 years of research.
according to the Gizmag article i saw on this, it's a 'disposable' unit.
it also says we can expect the price to drop significantly as production gins up.
Quote from: Iptuous on February 20, 2012, 03:36:40 AM
Yeah. The GridION seems to be multi-use, though. Still probably much smaller and cheaper than any pyrosequencer out there.
For those of you who are not familiar, there are 3 (now 4) generations of sequencing technology.
The first is Sanger Sequencing (http://www.genetic-inference.co.uk/blog/2009/04/basics-sequencing-dna-part-1/), which relies on a process called the Polymerase Chain Reaction (or PCR for short) to amplify the DNA. PCR was a radical discovery (apparently conceived after an acid trip) by Kary Mullis (http://en.wikipedia.org/wiki/Kary_Mullis).
The general idea is this: you have one DNA strand and you want a whole bunch of copies. Now, you know that when you heat DNA, the double helix pulls apart into its two complementary strands, and you know that if you add a DNA polymerase (a protein that finds single strands and builds the complement to them), you get two double-stranded DNA helices. The problem is that most DNA polymerases don't like getting heated; they tend to denature. So Mullis turned to bacteria from thermal hot springs and used their DNA polymerase. Suddenly, you could add this "Taq polymerase", which doesn't denature under high heat, to the mix, add primers that attach to your gene of interest, run the mixture through a repeating hot-warm cycle of water baths, and come out with a huge amount of DNA. Every time it goes in the hot bath the DNA denatures; every time it goes in the warm bath the Taq polymerase builds the complementary strands.
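To put numbers on why that cycling is such a big deal, here's the idealized arithmetic in a couple of lines of Python (real reactions eventually plateau, but the early cycles really are close to this):

    # Idealized PCR: each hot/warm cycle doubles the number of template copies.
    copies = 10          # starting molecules
    cycles = 30          # a typical run
    print(copies * 2 ** cycles)   # 10737418240, i.e. ~10 billion copies from 10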
So, now you have a whole lot of DNA. But the strands aren't all the full length of the gene, because you've also added bases to the solution that are a little broken, and these randomly cap the growing strand. Which means you have a whole bunch of different lengths of DNA. And if you include only one type of broken base (say, G, guanine) in the mix, then all the strands will be capped at places where a guanine would attach. Do this with the other three bases, and now you can run each reaction in its own lane on a gel electrophoresis setup, and the electric field will pull the shorter strands through the gel faster than the longer ones. Basically, you'll have a visual readout of the sequence, with each base's position given by the length of the strand. This was later updated from gels to capillary tubes, but it's still rather low tech, and it requires a huge amount of space and time to produce a goodly amount of sequence. It takes years to sequence a human genome this way. Incidentally, this is the method that was used for the Human Genome Project.
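If you want the logic of reading that gel spelled out, here's a toy version in Python (made-up fragment lengths; each "lane" holds the lengths produced by one chain-terminating base):

    # Toy Sanger readout: a fragment of length N in the 'G' lane means position N is G.
    # Sorting every (length, base) pair shortest-to-longest recovers the sequence,
    # which is exactly what reading the gel bottom-to-top does.
    lanes = {              # invented data for the sequence "GATTC"
        "G": [1],
        "A": [2],
        "T": [3, 4],
        "C": [5],
    }
    fragments = sorted((length, base) for base, lengths in lanes.items() for length in lengths)
    print("".join(base for _, base in fragments))   # -> GATTC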
Second generation sequencing (http://www.genomesunzipped.org/2010/09/basics-second-generation-sequencing.php) is somewhat the same, but much faster. It still uses PCR to amplify the DNA, and still uses those "broken" bases, but instead of the kind used previously, it uses ones that carry a dye and a removable block (reversible dye terminators). Wash away the other bases, and you can clearly see the color. Now cleave off the dye and the block, and add another dye-labeled base. Rinse (literally), repeat, and from the sequence of colors you get the sequence of the DNA. But the major problem with this is that, while it's faster than Sanger sequencing, it still takes a long time. You can run it in high throughput arrays, but it is still time consuming. It also can't sequence long segments of DNA; the reads are short.
The solution to this has been something called shotgun sequencing, where the DNA is cut up into a whole bunch of manageable bits, sequenced, and then reassembled by software using the overlaps between fragments. This is the method that was used by Craig Venter in his ocean water sample sequencing work.
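A crude sketch of what that reassembly software is doing, greedy merging by overlaps (real assemblers are vastly more sophisticated and have to cope with sequencing errors, repeats, and both strands):

    # Toy shotgun assembly: repeatedly merge the two reads with the largest
    # suffix/prefix overlap until one contig remains.
    def overlap(a, b):
        # length of the longest suffix of a that is also a prefix of b
        for size in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:size]):
                return size
        return 0

    def assemble(reads):
        reads = list(reads)
        while len(reads) > 1:
            size, a, b = max(((overlap(a, b), a, b)
                              for a in reads for b in reads if a is not b),
                             key=lambda t: t[0])
            reads.remove(a)
            reads.remove(b)
            reads.append(a + b[size:])
        return reads[0]

    print(assemble(["TTCGA", "GATTC", "CGACC"]))   # -> GATTCGACC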
Third generation is called pyrosequencing, where instead of a dye, a flash of light is detected whenever the next base is added (this is simplifying a bit; there are a bunch of other molecules involved). The four bases are flowed over the reaction one at a time, so you know which base produced each flash, and the output is a graph of light pulses that reads out the sequence. This is faster, but it still requires PCR, and it produces shorter reads than even first generation Sanger sequencing. But with shotgun sequencing you only get short fragments anyway. When I was in grad school, this was the method that made you drool. Third gen was wild stuff.
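Roughly how that light readout becomes sequence, as a toy "flowgram" (assuming a fixed flow order and no noise; real machines struggle with long runs of the same base):

    # Toy pyrosequencing flowgram: nucleotides are flowed in a fixed order, and the
    # light intensity at each flow reports how many identical bases were added in a row.
    FLOW_ORDER = "TACG"

    def flowgram(template):
        signals, i = [], 0
        while i < len(template):
            for base in FLOW_ORDER:
                run = 0
                while i < len(template) and template[i] == base:
                    run += 1
                    i += 1
                signals.append((base, run))   # run == relative light intensity
        return signals

    print(flowgram("TTAGG"))   # [('T', 2), ('A', 1), ('C', 0), ('G', 2)]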
Fourth generation includes stuff like the nanopore technology. It's fast, it's cheap, and it can sequence very long strands of DNA without cutting them up first. PCR and shotgun sequencing are no longer needed to map an entire genome. I hope now that I've described the above methods, you'll understand why my mind is blown. Compared to 4th gen, 1st gen is like banging rocks together. And it is /still/ amazing. You could still, for example, use PCR to amplify a particular gene in a microarray, with each well being a different sample, and use nanopore electrosequencing to sequence them all at once, very quickly. Mitochondrial genomes are becoming the standard for molecular identification of animal species, for example, and you could sequence a whole bunch of those in no time at all through this method.
Awesome post, Kai! :)
it really puts the advances in perspective.
if you were to speculate about the possible repercussions of being able to nigh-instantly and very cheaply sequence an individual entity (with particular interest in humans), what would you see as the most radical of possibilities?
how might medicine most effectively be able to use the wealth of data if the entire population had their genome sequenced?
Quote from: Iptuous on February 20, 2012, 03:17:08 PM
If you had the entire genomes of a large portion of the population, and could correlate genetic diseases and disorders with them, the causes of some of those conditions would become obvious pretty quickly. You could determine, from birth, what medical issues a person may be susceptible to later in life. You could tailor treatment for individuals. On the pathogen side, diagnosis would become very easy.
These are, of course, conservative estimates.
In the realm of radical speculation, this may very well be the step needed before people start custom wetware augmentation. I'm talking gene therapy, laboratory organ growth, part replacement and enhancement, grafting...I mean, if you can figure out how to screw with human development, turning cells back into pluripotent stem cells and guiding them through tissue growth, you could do practically anything. Want vision as good as a hawk? Or smell as good as a bloodhound? But let's consider this: when real life furries are walking around, how weird is queer going to be?
the conservative estimates are incredible. i'm curious what sort of effort would be required to implement them. is the computation to correlate this volume of data in place already, or would it require additional advancement?
how could this be organized? is there a medical authority that could handle the task?
it seems the possibilities are a shining jewel. an irresistible lure.
i'm game. let's do it!
Quote from: ZL 'Kai' Burington, M.S. on February 20, 2012, 03:01:25 PM
Quote from: Iptuous on February 20, 2012, 03:36:40 AM
according to the Gizmag article i saw on this, it's a 'disposable' unit.
it also says we can expect the price to drop significantly as production gins up.
Yeah. The DamION seems to be multi use though. Still probably much smaller and cheaper than any pirosequencer out there.
For those of you who are not familiar, there are 3 (now 4) generations of sequencing technology.
The first is Sanger Sequencing (http://www.genetic-inference.co.uk/blog/2009/04/basics-sequencing-dna-part-1/), which uses a process called a Polymerase Chain Reaction (or PCR for short). PCR was a radical discovery (apparently discovered after an acid trip) by Kary Mullis (http://en.wikipedia.org/wiki/Kary_Mullis).
The general idea is this: you have one DNA strand and you want a whole bunch of copies. Now, you know that when you heat DNA, the double helix pulls apart into the two complimentary strands, and you know that if you added a DNA polymerase (a protein that finds single strands and builds the compliment to them) You will get two double stranded DNA helixes. Now, the problem is most DNA polymerase doesn't like getting heated, it tends to denature. So Mullis looked for bacteria in thermal hot springs and used the DNA polymerase from those. Suddenly, you could add this "taq polymerase" protein which doesn't denature under high heat to the mix, add a primer which will attach to your gene of interest, run the mixture through a successive hot-warm-hot sequence of water baths, and come out with a huge amount of DNA. Every time it goes in the hot water bath, it denatures, every time it goes in the warm water bath, the taq polymerase makes a complimentary strand.
So, now you have a whole lot of DNA. But the DNA isn't the entire length of the gene, because you've added base pairs to the solution that are a little broken, and these will randomly be used to cap the length. Which means you have a whole bunch of different lengths of DNA. And if you include only one type of broken base pair (say, a G (guanine)) in the mix, then all the lengths will be all capped at places where a Guanine would attach. Do this with the other three basepairs, and now you can place each basepair cap in it's own row on a gel electrophoresis setup, and the electric dipoles will pull the shorter strands faster than the longer ones. Basically, you'll have a visual matrix of the sequence, with each basepair in the position by length of the strand. This was updated from the gel setup into capillary tubes, but it still is rather low tech, and requires a huge amount of space to do a goodly amount of sequence. It takes years to sequence a human genome this way. Incidently, this is the method that was used for the Human Genome Project.
Second generation sequencing (http://www.genomesunzipped.org/2010/09/basics-second-generation-sequencing.php) is somewhat the same, but much faster. It still uses PCR to amplify the genes, and still uses these broken basepairs, but instead of the kind previously, it uses ones that have a dye attached. This is called Dye Terminator Sequencing. Wash away the other basepairs, and you can clearly see the color. Now use an enzyme to cut off the dye terminator, and add another dye labeled basepair. Rinse (literally), repeat, and by the sequence of colors, you will have the sequence of the DNA. But the major problem with this is that, while it's faster than Sanger sequencing, it still takes a long time. You can do this in high thoroughput microarrays, but it still is time consuming. It also can't sequence long segments of DNA.
Fourth generation includes stuff like the nanopore technology. It's fast, it's cheap, and it can sequence very long strands of DNA without cutting them up first. PCR and shotgun sequencing no longer needed to map an entire genome. I hope now that I've described the above methods, you'll understand why my mind is blown. Compared to 4th gen, 1st gen is like banging rocks together. And it is /still/ amazing. You could still, for example, use PCR to amplyfy a particular gene in microarray, with each well being a different sample, and use nanopore electrosequencing to sequence them all at once, very quickly. Mitochondrial genomes are becoming the standard for molecular identification of animal species, for example, and you could sequence a whole bunch of those in no time at all through this method.
Hey, this is the stuff b used to do for a living, before his lab had some funding trouble and he had to go find another job. (The lab did eventually get its funding.)
I don't know the details, I just know that he designed genetic sequencing arrays.
Here's one of his older group papers: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1853129/
Quote from: Iptuous on February 20, 2012, 03:59:52 PM
You would need this generation's supercomputers. Even though a human genome is only about 750 MB, that's a lot of data points to match. We can cut down on that since we have the human genome mapped, but it still needs a massive amount of processing power. The other issue I suspect will be data privacy. There are going to be problems with piracy and insurance companies screwing people over.
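For the curious, the back-of-the-envelope behind that file size (2 bits per base, ignoring quality scores and metadata, which bloat real files considerably):

    # ~3.2 billion base pairs, 4 possible bases -> 2 bits per base.
    bases = 3.2e9
    megabytes = bases * 2 / 8 / 1e6   # 2 bits each, 8 bits per byte
    print(megabytes)                  # 800.0, same ballpark as the 750 MB figure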
Quote from: Nigel on February 20, 2012, 04:09:21 PM
That's using hybridization to find single nucleotide polymorphisms (SNPs). This won't even be necessary with whole genome sequencing, because you'll just use software to find the genes of interest after sequencing.
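"Just use software" really is that direct in the simplest case, a plain text search (real pipelines use aligners like BLAST that tolerate mismatches, gaps, and the reverse strand; the sequences here are invented):

    # Toy gene lookup in an assembled genome: exact substring search.
    genome = "TTGACGGATTACAGGCTTAA"   # stand-in assembled sequence
    gene_of_interest = "GATTACA"      # made-up query
    pos = genome.find(gene_of_interest)
    print("found at position", pos)   # 6, or -1 if absent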
It's good to see we occasionally get the COOL kind of the future, instead of just the EEEEEEEEEEeeee part of it.
Quote from: ZL 'Kai' Burington, M.S. on February 20, 2012, 04:46:11 PM
One of the projects my dad worked on was to link up computers through their local area network in order to handle tasks that the company would normally throw at their supercomputers. Overnight, the computers in the office would be left on and the idle CPUs would each handle a small part of large scale data mapping, with some of the machines watching over the others to make sure it all synched up at the end.
I wonder if something similar could be done via the internet. You'd still need a large amount of capital (and knowledge!) to get the project off the ground, but you might not need a multi-million dollar piece of hardware...
Edit: Derp, he was replacing mainframes not supercomputers. My mistake.
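A bare-bones version of that split-the-work idea, on one machine with Python's multiprocessing (doing it over the internet adds job tracking, verification, and redundancy, which is most of the hard part):

    # Toy work-splitting: compute the G+C count of a long sequence in parallel
    # chunks, then combine the partial results.
    from multiprocessing import Pool

    def gc_count(chunk):
        return chunk.count("G") + chunk.count("C")

    if __name__ == "__main__":
        genome = "GATTACA" * 100000   # stand-in data, ~700 kb
        size = 50000
        chunks = [genome[i:i + size] for i in range(0, len(genome), size)]
        with Pool() as pool:
            total = sum(pool.map(gc_count, chunks))
        print(total, "G/C bases out of", len(genome))   # 200000 of 700000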
Sure, that's the idea behind SETI@home and Folding@home...
Amusingly enough, despite the fact that this device is much faster than anything else currently available, it still doesn't come anywhere near how fast people who watch CSI think DNA can be compared.
Besides helping us with human DNA, wouldn't this help a lot with other diseases? It could perhaps make identifying certain diseases very easy, and then give us much more accurate data on how viruses, bacteria and fungal infections spread.
I don't know how easy it is to isolate these things from the human, so maybe this is completely impractical.
This is some pretty amazing shit, Kai. What do you think will be next, given the progression of events? What would you like to see? When I was in school, we did a lot with the Human Genome Project research, going over what they were doing and extrapolating things in a theoretical sense. Things are advancing rapidly, it's so exciting.
Quote from: el sjaako on February 20, 2012, 08:02:18 PM
No, you're correct. That is one of the conservative estimates I listed above.
How these things work can be illustrated by a system already in place, called iBoL (international Barcode of Life). People around the world are sequencing short units of mitochondrial DNA (the cytochrome c oxidase subunit I gene, or COI for short) from already identified organisms and uploading these sequences to a database, and other people are comparing their "barcodes" of unknown origin to those of known origin, generating an identification.
This of course won't work for any organism without mitochondria, but with this new system you could just sequence entire pathogen genomes, and allow other people to compare their unknowns to your knowns.
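The comparison step itself is simple enough to sketch (toy percent identity against a reference table; iBoL's real pipeline uses proper alignments and ~650 bp barcodes, and the sequences and species names below are invented):

    # Toy DNA barcoding: compare an unknown COI fragment against reference
    # barcodes and report the closest match by simple percent identity.
    references = {
        "Species A": "ATGGCCATTGTAATGGGCC",
        "Species B": "ATGGCTATCGTTATGGGCC",
    }

    def identity(a, b):
        return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

    unknown = "ATGGCCATTGTTATGGGCC"
    best = max(references, key=lambda name: identity(unknown, references[name]))
    print(best, round(identity(unknown, references[best]) * 100, 1), "% identity")   # Species A, 94.7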
There are problems with this, of course. Contamination is a big issue. In many cases traditional identification techniques are faster and cheaper. The drive behind these systems is that identification, if made cheap enough, could be automated to some extent. You would still need 'quality control': trained taxonomists and microbiologists who can identify by traditional methods.
Boy, I tell ya though. Having an automated identification system for aquatic invertebrate samples would be amazing. Sure, there will be errors, hard-to-separate groups, new species, questionable identifications, but if the vast number of specimens could be identified without my input, that would free up so much more time for the other stuff that really /does/ need my input. I could do more taxonomy because I wouldn't be spending most of my time identifying specimens that do not...well, let's put it this way.
When I was doing my master's research, I had an assistantship where I was taking very large samples of adult aquatic insects (via blacklight trap) in the field. We're talking tens of thousands of insects in each sample, times 4 for each date, every two weeks for an entire collecting season (16 dates). 64 samples. Now, I didn't need to identify all the insects involved, just the caddisflies. But that was still ~30 thousand insects or more by the end of the project. Most of those caddisflies were of a few types, a few species. I looked at thousands of those individuals. I looked at more individuals of those species than probably anyone else ever has. I became very familiar with their morphology and phenology, and they were still really interesting, but it was taking me so much time to identify these specimens that I wasn't able to finish the project by myself in the time allotted. These weren't the sort of things you could just glance at and say "that's that species, that's this species" and triage in seconds. I had to look at the male and female genitalia under a stereomicroscope and hold the specimen a certain way so I could discern characters. The females, because of their morphology, usually took me about 5-10 seconds. The males slightly longer. I had to count the number of males and females of each species in each sample, so it was a very tedious affair, my feelings for entomology and taxonomy aside.
It would be really nice if I could have delegated some of that to a machine. Sort out, for example, the males and females of these really abundant groups separately, and run them all through an automated identification system. Since I would have the males and females separate (an easy enough task, since insect genitalia tend to be radically different between the sexes), I would then be able to get counts without doing each specimen individually myself.
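And the bookkeeping after a machine has labeled each specimen really is trivial (made-up labels; the caddisfly genus names are just placeholders):

    # Toy tally: once each specimen carries an automated (species, sex) label,
    # per-sample counts fall out of a one-liner.
    from collections import Counter

    sample = [("Cheumatopsyche sp.", "female"), ("Cheumatopsyche sp.", "male"),
              ("Hydropsyche sp.", "female"), ("Cheumatopsyche sp.", "female")]
    print(Counter(sample))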
Semi-automated identification would be a huge boon for medical professionals. Especially because, unlike insects, bacteria are often only distinguishable by some very difficult genetic characters.
Quote from: ZL 'Kai' Burington, M.S. on February 20, 2012, 04:50:03 PM
That paper was from 2007, before he got his PhD and started working in the lab I mentioned... I just thought I'd link to a sample of the kind of shit he's into, but I didn't want to link to his doctoral thesis because it has his full real name on it without anyone else's full real names to obscure his ID. :) I'll have to ask him what he thinks of this new technology.
Quote from: Cardinal Pizza Deliverance. on February 20, 2012, 08:16:19 PM
What would I personally like to see next?
Hm, well...
1) The above semi-automated identification systems would be wonderful.
2) One of my major worries in all this is that taxonomic work is going to become distanced further and further from the organism. If sequencing technology were to become mundane, everyday, I think it would lose its 'shiny effect'. It would become a normal tool in the toolbox that also includes traditional morphological techniques, and people would use both on a daily basis. I really want to see a return to basic natural history research in biology, where people go out and investigate living things (and previously living things) without thought for "what shiny new technology can I use to look at this". There's a real confusion where people who should be doing scientific work are ending up as lab jockeys because the tech isn't automated enough. The mundanity and ease of use of 4th gen sequencing tech would free up time to do more fundamental natural history work. And that is something I dearly hope for, because it will be to the benefit of natural history collections and to the preservation of our biological heritage.
3) I would love to see gene therapy become a reality. We can do this already with mitochondrial disorders (though not yet allowed on human subjects).
4) On that same note, I would love to see custom wetware augmentation become reality. Both three and four require new knowledge of developmental biology. It will probably be closely tied to pluripotent stem cell creation and laboratory organ growth.
5) Something a colleague of mine pointed out is what we really need is a map of the human epigenome. That is, it's not only the basic code but what genes are turned off when and where that determines the development of an organism. We need a complete map of when and where genes are turned on and off (epigenomics) to do some of this really cool scifi stuff like gene therapy.
Dependence and focus on the shiny tools is always a worry. Look at the uproar when calculators became commonplace in math classes. Instead of helping people learn math, it became a replacement brain for doing it without the person having to learn the shit at all. Now you have kids who can't calculate change for the fries they're selling.
I can imagine the horrors that would spawn in the scientific realm. Using this technology as a tool to allow people to get back to other aspects of the work and back outside would be great.
Epigenomics is fascinating shit. Mapping the epigenome . . . my brain boggles at the applications. I need to read more about this.
Kai, is your brain available for technical advice for fiction?
Quote from: Queen Gogira Pennyworth, BSW on February 20, 2012, 10:20:24 PM
Yes, but a disclaimer: I have research skills, a heavy background in biology, and a general interest in science, but I am not an expert at engineering, physics or mathematics. My background is generously in aquatic biology, taxonomy, and entomology, with a dollop of behavior and ecology.
Quote from: ZL 'Kai' Burington, M.S. on February 20, 2012, 11:24:04 PM
DNA modeled nanobots too far outside your scope for comfort?
Quote from: Cardinal Pizza Deliverance. on February 20, 2012, 10:02:13 PM
And my first thought is, how would you even go about mapping the epigenome? Because unlike the genome, which exists in one dimension (a line of sequence), the epigenome exists in /four/ dimensions: three-dimensional space, and time. First of all, I guess, you need to know the proteome for every stage of development, and how it differs between tissues. That's the sum of all proteins and their interactions. Because some of the most important players in epigenetics are transcription factors. I've waxed poetic about them elsewhere on this forum. Transcription factors are proteins that bind to the DNA and either enhance or decrease the expression of whatever genes they are connected to.
At conception (in animals) you get immediate division of cells. The mother puts down a single gradient of a protein that polarizes the zygote into anterior and posterior. This transcription factor upregulates some genes in some areas, and downregulates others in other areas. Those genes are also transcription factors. You can see how very quickly these transcription factor gradients set up a three-dimensional grid on the zygote, determining when and where various parts form. It's transcription factors all the way until actual structures such as muscle, bone, nerves, etc., are being physically formed.
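The gradient-into-regions idea in miniature, the classic "French flag" threshold picture (the numbers and gene names are invented):

    # Toy morphogen gradient: a maternal transcription factor falls off from
    # anterior to posterior, and downstream genes switch on only above their own
    # thresholds, carving the axis into discrete regions.
    positions = [i / 10 for i in range(11)]      # 0.0 (anterior) .. 1.0 (posterior)
    thresholds = {"gene_head": 0.7, "gene_trunk": 0.4, "gene_tail": 0.0}

    def concentration(x):
        return 1.0 - x                           # made-up linear gradient

    for x in positions:
        active = [g for g, t in thresholds.items() if concentration(x) >= t]
        print("x=%.1f  active: %s" % (x, ", ".join(active)))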
We know some of these things from mice and vinegar flies, but it's most certainly different in humans because, well, we're morphologically and physiologically different. So this has been a problem because, humans being humans, we're not fond of killing our embryos and fetuses for scientific investigation.
We're getting better at using mice though. There's a huge project being run by NCSU, breeding a variety of strains of mice and producing a computer database of the genes and their interactions in each strain. Epigenomics is a huge undertaking, hundreds of times bigger than the Human Genome Project. It's a step below neurology, but still very complicated.
Quote from: Queen Gogira Pennyworth, BSW on February 20, 2012, 11:26:50 PM
What exactly do you want to know? I mean, Craig Venter and his crew have been creating synthetic organisms, so a virus-like DNA carrier which can perform a simple task isn't so complicated. The issue is that viruses can't do anything by themselves; they need a host. Bacteria, on the other hand, can self-replicate and perform tasks. People are working on synthetic organisms that will digest various materials, or produce various materials, for example.
Quote from: ZL 'Kai' Burington, M.S. on February 21, 2012, 12:56:17 AM
Maybe they'll come up with one that eats plastic and produces something interesting.
Quote from: Cardinal Pizza Deliverance. on February 21, 2012, 01:06:29 AM
I honestly am at a loss for anything I would like bacteria to make. Whenever I think "petrol" or "biofuel" I feel like a douche. I'm sick of burning carbon chains to get around.
microscopic nanobots for extracting rare earth metals and transporting to the surface, wetware, quantum computing kinda stuff. Obviously there's a lot of different technologies in play, there, but the one I'm going to need to be able to explain in sufficient detail is the programming of self-constructing nanobots that are good for *something*.
Quote from: Queen Gogira Pennyworth, BSW on February 21, 2012, 01:27:38 AM
Well, one of the issues with DNA-based nanobots is that they will evolve. They are going to have differential survival based on the selection pressures at hand, and if they find a better stable strategy, they are going to take it. Think "Andromeda Strain". Or Ian Malcolm in Jurassic Park. If you want to do this, you have to make sure that the synthetic bionanobots have the greatest survival reward when they do exactly what you want them to do, and that there are no other good options. Also, how in the world are they going to transport the materials?
Quote from: ZL 'Kai' Burington, M.S. on February 21, 2012, 01:35:38 AM
Bucket chain.
Quote from: ZL 'Kai' Burington, M.S. on February 21, 2012, 01:35:38 AM
I'm kind of banking on this. I have no faith in the Mayans, so I'm sort of relying on irresponsible scientists turning every organic thing on the planet into grey goo with the consistency of baby shit.
Very cool! I appreciate how your description of the various generations of tech made me go from "Whoa, cool future tech" to "WHOA!!!!! BADASS COOL FUTURE TECH!"
Quote from: The Good Reverend Roger on February 20, 2012, 05:07:30 PM
Yeah, this.
Quote from: Telarus on February 21, 2012, 09:23:17 AM
Seriously. PCR is cool enough on its own. The fact that we, with a protein stolen from hot-spring bacteria, can double DNA fragments just by putting a nucleic acid soup through a series of water baths of alternating temperature...that is freaking amazing.
Quote from: ZL 'Kai' Burington, M.S. on February 21, 2012, 01:19:33 AM
What about an anti-autoimmune-virus-virus? Or some other synthetic mechanism that specifically targets known autoimmune viruses? (virii?)
Quote from: Phosphatidylserine on February 22, 2012, 05:45:25 PM
Viruses is correct. You mean, synthetic anti-viral bacteria that can fight HIV? That would be awesome.
It seems like an appropriate use for them. I'm not sure if it's in the realm of trivial yet, though, or anywhere close.
Plus, if we screw it up we just added to the quickly growing field of biological weapons research and development.