Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - P3nT4gR4m

Pages: 1 2 3 [4] 5 6 7 ... 18
Quantum mindfuck alert!  :eek:

Quote: "...could solve certain problems that could not be solved by any non-quantum computer, even if the entire mass and energy of the universe were at its disposal and molded into the best possible computer."

This is not a proper computer but, in much the same way a GPU isn't a proper computer, it's mindfuckingly fast at a specific, narrow subset of computation.

First the caveat (the text in white letters on the graph). D-Wave has not built a general-purpose quantum computer. Think of it as an application-specific processor, tuned to perform one task — solving discrete optimization problems. This happens to map to many real world applications, from finance to molecular modeling to machine learning, but it is not going to change our current personal computing tasks. In the near term, assume it will apply to scientific supercomputing tasks and commercial optimization tasks where a heuristic may suffice today, and perhaps it will be lurking in the shadows of an Internet giant’s data center improving image recognition and other forms of near-AI magic. In most cases, the quantum computer would be an accelerating coprocessor to a classical compute cluster.
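To make "discrete optimization" concrete: the problems these annealers chew on can be written as QUBOs (quadratic unconstrained binary optimization). Here's a minimal sketch in Python - the coefficient values are made up purely for illustration, and it solves a toy instance by classical brute force, which is exactly the part that stops scaling:

```python
from itertools import product

# A QUBO assigns every bit-string x a cost sum(Q[i,j] * x[i] * x[j]);
# the annealer's whole job is finding the assignment with minimum cost.
# This 3-variable Q is a toy instance with arbitrary illustrative values.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms (diagonal)
    (0, 1): 2.0, (1, 2): 2.0,                  # quadratic couplings
}

def qubo_cost(x, Q):
    """Cost of binary assignment x under QUBO coefficient dict Q."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Classical brute force: fine for 3 bits, but the search space doubles
# with every added variable - that exponential wall is the annealer's pitch.
best = min(product((0, 1), repeat=3), key=lambda x: qubo_cost(x, Q))
print(best, qubo_cost(best, Q))  # (1, 0, 1) with cost -2.0
```

Same shape of problem, whether the thing crunching it is a classical cluster or a quantum coprocessor sitting beside one.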


The team found that in U6, the Prp24 protein and RNA—like two partners holding hands—are intimately linked together in a type of molecular symbiosis. The structure yields clues about the relationship and the relative ages of RNA and proteins, once thought to be much further apart on an evolutionary time scale.

"What's so cool is the degree of co-evolution of RNA and protein," Brow says. "It's obvious RNA and protein had to be pretty close friends already to evolve like this."

So I'd never thought much about the period in history when a puddle of amino acids turned into DNA. This article mentions that RNA evolved first and I've always had this vague gap in my head where - amino acids and then something something something and then BINGO- Nanofactories!

The thing with gaps in my head is, quite often, I don't notice them. The other thing is, once I become aware of them, some administrative function demands they are filled in with as much reliable information as is available. So I come to the good people of PD, cos I know that often, in matters such as these, it's often faster than Google.

Is there an equivalent to this somewhere, with chemicals on the left and cells on the right?

This is not an argument. I'm not stating a position. This is a quarter baked thought that's begun materialising in my head and I'm writing it down to explore the development of a seed of an idea that I'm not too clear on yet, in the hope that highly intelligent random internet people might provide some input.

The reason I'm stating this is because (knowing me) I'll forget to E-Prime or whatever the fuck and say things like "it's like this" or "that's the reason" as my train of thought goes barrelling down the track, and I want to get it straight from the get-go that I'm not assuming any of this bullshit, I'm merely examining it as an alternative to current models.

I will not defend it if challenged, because it's random crap that my brain is coming up with. I'm not even sure what the hell it is yet but it's piqued my interest so here goes...

I'm a software engineer. I'm a hacker. I work with software. My job is to communicate massively complex sets of instructions and conditional logic to machines which carry out these instructions on datasets and then communicate the results back to me or to the end user.

Something that I've always been vaguely aware of but never given much thought to is what it must be like for the people (a significant percentage?) who don't actually understand exactly what software is, how it behaves, and what it represents as a manifest phenomenon at the granular levels at which all software engineers encounter and interact with it.

So in general software-geek parlance these individuals are commonly termed "users" with a small "u" (we're an elitist bunch, us code nerds), and these are the people who don't think of what they're doing as software, rather as the abstractions the designers expose them to. They're using a spreadsheet. They're sending an email. They're playing Angry Birds.

Then there's people who understand a fair bit more about the mechanics of the stuff we're dealing with, but their knowledge rarely extends very far below the abstract interfaces of the users. They might kinda grok what a config file does or be able to apply some basic conditional logic in their Excel formulae. We call them "Superusers", by way of differentiating.

Superusers get a capital letter at the start of their name and some of them, the chosen few, will be elevated to the exalted status of "Admin". Unlike the user and the Superuser, the admin is someone who we can leave in a room full of sharp objects and reasonably expect nothing too bad to happen as a result.

The admin may be someone who has absolute command of the system he administrates and yet, at the same time, from the engineer's point of view, he's just another user. He may have a perfect understanding of the operations of the system at the interface level, but he's still dealing with a construct, an emergent property, an abstract representation of the software itself.

So what is this software? What is it made of? How does it work? In a sense I could describe it as a language of syntax and semantics which we use to describe existent systems, existent materially or in the abstract, in a way which effectively mimics or parallels the operations of the systems it describes.

That old cliché about language being "alive", referring allegorically to poetry and prose? No, I mean these "languages" are alive in the "Frankenstein" sense. The words and the sentences speak themselves, and they grow and sprout and branch in direct correlation to any existent system they represent and, as a result, fluency in these languages grants the hacker something that's akin to a new sense, a new way of interpreting external information - the sense of code.

A coder develops an intrinsic sense of code which, once developed, isn't something that you just turn off when you step away from the computer, any more than you'd turn your sense of sound off when you stopped listening to the radio. Everything a coder observes, using the traditional, biological senses, can also be observed and filtered via the sense of code.

If all existent phenomena can be considered as a system, then all existent phenomena can be expressed, examined and extrapolated in code. If this is the case, then it follows that we can examine the efficiency, the accuracy and the performance of these systems, simply by analysing the resulting code descriptions.

Software has advanced rapidly over the last couple of decades. The systems that software represents have been refined and shaped and moulded by way of the software itself, which was quickly able to figure out increasingly efficient ways of carrying out the end goals any given system was required to achieve.

However, not all systems have been described in code. Many human systems, especially in the interpersonal and up to the societal sphere of interaction, still remain clunky and old and inefficient when observed with a sense of code. What a sense of code is, in the context of this framework, is a method of describing, objectively, the system being observed. This objectivity is the kicker. By the same token that it could be argued that objectivity is one of the fundamental strengths of scientific investigation, so too does code benefit from having objectivity built in.

As I said earlier, code can be used to describe all systems - biological, neurological, sociological, psychological - and, by declaring these systems in code and examining their operations through the lens of a well-developed sense of code, maybe these systems can be analysed objectively, in a way that allows us to develop and evolve them through the iterative process of refinement which is a coder's stock in trade.

What I'm getting at here is not necessarily simulation; rather it's simply a description of a system of operations, described in a language that mirrors the operations of the system analogously. I find it difficult, at this stage, to further elaborate and communicate my idea in terms which a user, someone with little or no sense of code, would understand. How do you describe the colour blue to a blind person, or the sound of rain to someone who's deaf? Perhaps there's a coder reading this now who gets the drift of where I'm going and can think of a way to explain this? Maybe, like me, you've never given much thought to this "sense of code" idea but, now that you have, it seems to hold some merit?

Techmology and Scientism / David Eagleman - What the actual fuck?
« on: May 31, 2014, 06:43:42 am »
This guy is just a crackpot, talking gobshite, right? :eek:


The ENIAC computer was switched on in 1946, and thus began the information revolution. It had a CPU clock speed of 100 kHz (100,000 machine cycles per second) and a proto-ALU that could perform a (then) staggering 357 multiplications per second, or 35 division operations, with an operational uptime of around 50%.

The original form factor weighed in at 30 tons and stood an impressive 8 feet tall, 100 feet long and 3 feet deep. It set the US Army back an estimated six million bucks in today's money.


Ten years later IBM released the 305 RAMAC, introducing the world's first commercially available hard disk storage system. It was considerably smaller than the ENIAC, at 30 feet by 50 feet, the hard drive taking up a further 16 square feet for a capacity of around 5 megs.

For all its diminutive stature, it could outthink the ENIAC by a factor of about 10 and, more importantly, it could commit its instructions and the results of its calculations to memory. It retailed for $100,000 or leased for $3,500 a month. All in, 1,000 units shipped before it became obsolete in 1962.


Ten years later HP entered the computer market with its "go-anywhere, do-anything" 2116A. This tiny device could fit comfortably in a small office. Integrated circuits had by now replaced the vacuum valves of previous generations, and the latest and greatest machines could now store tens of thousands of instructions and values in true 16-bit registers, churning out calculations on the order of tens of thousands per second.

The 2116A was one of these machines, and you could have one delivered to your place of business for between $25,000 and $50,000, depending on options and upgrades.


Three years later the P3nT4gR4m MkI Biocomputational Device was released. None sold so far.


Seven years later Apple created the Apple I and sold it, in kit form, to home users. This was around the birth of personal computing, but it's a bit unfair to compare the specs of what was essentially a whole new sub-revolution, so let's stay with the business sphere for a little longer and give this fledgling platform a little time to mature, shall we?

The pinnacle of business in '76 was the Cray-1. It was a big bugger at 58 cubic feet and weighing in around 2.5 tons, but by fuck it was fast. 166 million floating-point operations per second fast. One of these bad boys would set you back a cool $10 million, but at least you had the option of more reasonably priced, albeit significantly less powerful, alternatives.


Ten years later IBM released their first laptop. At $2,000 a pop, it came with a quarter of a megabyte of RAM and an 80C88 processor clocked at 4.7 MHz, and it was portable at a smidgeon under 6 kilos on the scales. But I'm comparing apples and oranges again. What else was happening?

Well, since '76 we'd been fast approaching the end of mainframes, with minis and superminis like the IBM 6150 arriving on the scene, supporting multiple "dumb terminal" users all vying for compute cycles at the timeslice trough. With 16 megabytes of RAM and a 300 meg hard drive, these shrunk the average mainframe room into a form factor resembling a modern desktop PC on steroids, performing a couple of hundred thousand floating point operations per second.


Ten years later Intel released their 6th generation Pentium. Remember the first picture I posted, the old B&W one with the room full of wires and flashy lights? Well the picture above is the equivalent of several million of those. Five and a half million transistors in all, squeezed down to a little square slab of silicon about the size of your fingernail. Instructions per second? 500-odd million.


Ten years later Intel Core 2 Extreme - 49,161 Million instructions carried out, to the letter, every second.
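Taking the numbers above at face value - and treating ENIAC's multiplications-per-second and the Core 2's instructions-per-second as loosely comparable, which is a big hedge since they're different units - here's the back-of-envelope sum on the implied growth rate:

```python
import math

# Rough figures quoted above; the units differ, so this is indicative only.
eniac_ops = 357          # multiplications/sec, 1946
core2_ops = 49_161e6     # instructions/sec, 2006
years = 2006 - 1946

ratio = core2_ops / eniac_ops          # overall speedup over 60 years
doublings = math.log2(ratio)           # how many times performance doubled
doubling_time = years / doublings      # implied years per doubling

print(f"~{ratio:.1e}x speedup, doubling roughly every {doubling_time:.1f} years")
```

Works out to a doubling roughly every two years - which, units hedge and all, lands suspiciously close to what Moore predicted.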

In other news, computation has spawned a new platform - the smartphone - which, contrary to the name, has very little to do with telephony and is in fact an ultra-portable mobile computing platform.



Okay so it's got a cute factor but it's a good few refinements away from operating on the nanoscale. Nice to see a giant mockup actually working, tho.  :fap:

Was just talking to a mate, at the weekend, about how solar was almost ready to destroy the oil economy. Come back home and find this. I'd seen a meme on facebook and thought it was some kind of photoshop wind up. Turns out team america might be on verge of actually saving the world this time. America! Fuck yeah!

Aneristic Illusions / This thing I got about IP...
« on: May 19, 2014, 07:28:18 pm »
Explained more succinctly than I ever would

It’s not my land. It’s ours. And no one is hunting… If anything, we’re farming, and all the cross-pollination going on helps everyone.

Aneristic Illusions / A vain attempt to end the madness
« on: May 17, 2014, 08:49:14 am »
Any rational human: Please don't do this inconceivably stupid thing you complete fucking retards?

Crazy bastards making the decisions: We need these for FREEDUMZ!!!

Aneristic Illusions / The Replicator thread
« on: May 14, 2014, 07:58:26 pm »
This disparity in ownership and the greed factor of massive stockpiling will become irrelevant in the next couple of decades. In case nobody has noticed yet, we are on a roadmap to the invention of Star Trek Replicators™. Anything that can be manufactured, sold and consumed will be manufactured and consumed on the spot. There's no middleman. There's no "sold".

A molecular printer spits out any consumable on command. The only time a human gets involved is ordering then using the output. No-one sources the components, no-one assembles it, no-one delivers it, no-one manages the process of coordinating supply to production to order fulfilment. Who gets paid? What costs money? How does one earn money? Who's going to want to collect money? What use is it to them? These are the kinds of questions that come with replication.

I'd say you're being pretty optimistic about the timescale involved though I'd be delighted if you were right. If you're talking about a full on post scarcity society, I can only think of a couple of theoretical models off-hand and both pretty much change it into a reputation/social standing tool.

The main problem before reaching this stage is getting everyone in the position of relative privilege to stop fucking over several nations and actually co-operate on a long term basis. Probably worth a new thread to discuss this further if you care to?

My timescale predictions are optimistic by the standards of anyone who thinks it's twice that or more, and pessimistic by those who think it'll be sooner. Truth is no one knows. I make a best guess based on shit going on now, where that came from and how long it took to get there.

So this, right now, is pre-replicators

Shit going on now is they're printing really fucking tiny and they're poking individual atoms around now and forming them into complex lattices and tubes. For me (as an interested layman) this level of fucking tiny skillz has been coming to my notice, increasingly over the last 10-15 years.

Computation has miniaturised and seems headed to graphene, with recent breakthroughs in stamping tri-laminate sheets to provide transistor function. Before replicators we'll have nanoscale "smart plastics", only they won't be plastic, they'll be graphene (and stuff to follow it) based polymers functioning as nanoscale robotic supercomputer clusters, forming physical arrangements on command, increasing in granular resolution as the tech matures and miniaturises.

So this covers all the solid, synthetic, electronic objects we rely on to live. Furniture, kitchen equipment, clothing, transport, utility. It's all imbued with the ability to rearrange form and function on a verbal command. This shit is super-strong (hundreds of times stronger than steel), super-conductive (most conductive material on earth) and super-lightweight (laminate panels, 3 atoms thick). You won't need much of it to assume the shape of your dwelling of choice. Get this - this shit requires carbon, y'know? That shit that everything is made of. Graphene has recycling built in. We can make the motherfucker out of our 20c garbage.

Biotechnology will do the same for organics, nutrients, medicine, printing custom flora and fauna downloaded from Wiki-Bioforms. Learning to code biology (once we suss it out) gives us access to an already-existing self-replicating nanotechnology which produces all the stuff you need to keep consciousness alive that the nanomachine tech path cannot provide. This distinction itself becomes moot as bio borrows from nano and nano copies bio and the two disciplines merge and create a unified smart matter platform.

My prediction:

10 years would surprise me (but not too much).

40 years would seem feasible if there were no breakthroughs or shot-from-nowhere leaps in tech over the next ten years.

20-30 years is not ridiculous - the part of 20-30 that is 20 is, by definition, not ridiculous and it sounds fucking awesome and it makes me happier than 30-40 and it's more realistic than 10.

Replicator? Pretty much, but it's not a beam of light so y'know what I'm betting will happen? People will go "Yeah but all that that is, is them little na-ner machines making it look like it's a replicator."

But fuck those future assholes, my question is asked of the assholes right now - how long can we go on pretending that there's not enough shit to go round?

Think for Yourself, Schmuck! / The next revolution
« on: May 13, 2014, 05:37:04 pm »
The Agricultural Revolution - the thing was farming
Less people hunted
That's what most everybody did
The exchange was barter, then beads

The Industrial Revolution - the thing was industry
Less people did farming
Most everybody switched to factories
The exchange was gold, then promises of gold

The Information revolution - the thing was automation
Less people did much of anything
Most everybody is still struggling to keep up
The exchange was numbers
Then patterns painted in arrays of transistors
Vast beyond our ability to comprehend
Minuscule beyond the same

Most everyone is still doing shit
But it's getting easier
If it isn't for some then it fucking well ought
There's no technical reason why it fucking well shouldn't

This is the future
We've gone and fucking made it
Nobody noticed but the robots came
Sprouting from their clockwork ancestors
And they welded bits of metal and sold us carbonated soda
and they spat out the paper currency from before
for a while...
not so much nowadays

So what's the next revolution?

What's left for us to do?

When the machines are taking care of most everything that needs taken care of?

What's the exchange then?

Techmology and Scientism / Couple of quick maths questions
« on: May 12, 2014, 06:06:06 pm »
Prolly easy if you know sums.

Object A is travelling in a straight line, at a constant speed of 60 Mph toward object B.

Object B rapidly accelerates to 6 mph on a heading perpendicular to object A's trajectory.

How close can object A be to object B, if object B wants to leave it 'til the last second before paddling like fuck into the bow wave of an approaching oil tanker?

Hypothetically speaking?

Part 2 is the exact same question, but object A is travelling at 40 mph and is only the size of a fishing boat.

I'd appreciate if you could tell me how you worked it out. Now that I've formulated the question I'm kinda interested in how maths would be applied to solving it.
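For what it's worth, here's how I'd set those sums up, as a sketch. The trick is that the tanker's forward speed and object B's sideways speed are independent: all that matters is how long B takes to paddle clear of A's path, and how far A travels in that time. The clearance distance (half the tanker's beam plus a margin) isn't given in the question, so the figures below are my own guesses, purely for illustration:

```python
# Time for B to cover the clearance distance at its perpendicular speed
# equals the maximum time A can take to close the gap, so:
#   gap / v_a = clearance / v_b   =>   gap = clearance * (v_a / v_b)

def mph_to_fps(mph):
    """Miles per hour to feet per second (1 mile = 5280 ft)."""
    return mph * 5280 / 3600

def minimum_gap_ft(v_a_mph, v_b_mph, clearance_ft):
    """Smallest head-on gap (feet) at which B can still clear A's path."""
    time_to_clear = clearance_ft / mph_to_fps(v_b_mph)  # seconds B needs
    return mph_to_fps(v_a_mph) * time_to_clear          # feet A covers meanwhile

# Part 1: A at 60 mph, B at 6 mph; guessing ~60 ft of clearance.
# Speeds are in a 10:1 ratio, so the gap is 10x the clearance: 600 ft.
print(minimum_gap_ft(60, 6, 60))

# Part 2: A at 40 mph and only fishing-boat sized; guessing ~15 ft clearance.
# 40/6 of 15 ft = 100 ft.
print(minimum_gap_ft(40, 6, 15))
```

That's the mathematical break-even point only; any real "last second" wants a fat safety margin on top, and a real bow wave starts shoving you around well before the maths says so.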

Techmology and Scientism / Google and Facebook invest in drones
« on: May 08, 2014, 04:22:48 pm »

Peter Diamandis gets on my tits sometimes, coming across as a wannabe messiah, but he makes a lot of sense too.

Global Connectivity: We are heading from a world of 2 billion connected to the internet (in 2010) to at least 5 billion by 2020. But this drone technology, perhaps in combination with Google’s stratospheric balloons (called Project Loon), has the potential to take it to 7 billion by 2020.

This is perhaps one reason why, in April 2013, Google Executive Chairman Eric Schmidt made the surprising pronouncement that, “by the end of the decade (2020), everyone on Earth will be connected to the Internet.” This addition of another 3 billion to 5 billion new consumers on Earth is HUGE. If these people are not your customers, then they are your customer’s customers. They represent tens of trillions of dollars of new economic buying power entering the global economy. Don’t ignore this. This is a huge opportunity.

Think for Yourself, Schmuck! / Ted talk spins morality on its head
« on: May 02, 2014, 06:28:45 am »
Science can answer moral questions

It's from 2010 but it's the first time I've personally heard this idea.

Seem to make a lot of sense? Check.

Complete opposite of "conventional wisdom"? Check.

Worth exploring? I reckon so.

Rebuttals on a postcard...
