Corpania Ideas

CAVEAT! I'm an amateur philosopher and idea-generator. I am NOT an investment professional. Don't take any of my advice before consulting with an attorney and also a duly licensed authority on finance. Seriously, this is my personal blog of random ideas, for entertainment purposes only. Don't be an idiot.

Wednesday, January 28, 2015

Couple Quick Thoughts on AI (finally posting them)

Apparently everyone familiar with Ray Kurzweil's "Technological Singularity" concept of AI accepts two notions:

1) Limitless, ever-growing knowledge and ever-advancing intelligence are possible and therefore inevitable
2) Such growth must accelerate beyond human control

Though both may be likely, I think neither is necessarily true.

1)-Response: There are hard physical limits, like the speed of light and the amount of information that can ever be communicated over circuits or fiber optics. To believe those limits must be exceedable is tantamount to faith in magic. Consequently, such limits would put some upper bound on advancing technology or AI (even if that bound is unfathomably more advanced than anything we can conceive of today).
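To make that concrete, here's a back-of-the-envelope sketch in Python. The 30 cm signal path and 300 K temperature are my own illustrative choices, not measurements of anything; the two constants are standard physics.

import math

C = 299_792_458     # speed of light in a vacuum, m/s
K_B = 1.380649e-23  # Boltzmann constant, J/K

path_m = 0.30       # assumed one-way signal path (roughly a motherboard), m
temp_k = 300.0      # assumed operating temperature, kelvin

# Light-speed latency: no signal crosses the path faster than this, which
# bounds how quickly the distant parts of any machine can coordinate.
one_way_s = path_m / C
print(f"One-way light delay over {path_m} m: {one_way_s * 1e9:.2f} ns")

# Landauer limit: the minimum energy needed to erase one bit, which bounds
# how much irreversible computation a fixed energy budget can ever buy.
landauer_j = K_B * temp_k * math.log(2)
print(f"Landauer limit at {temp_k} K: {landauer_j:.2e} J per bit erased")

Neither number settles the argument, but each is a real ceiling: roughly a nanosecond per 30 cm hop, and about 3e-21 joules per erased bit at room temperature.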

2)-Response: Chaos theorists love to posit that evolvable nanobots (like self-aware, ever-learning AI) would destroy the earth because aberrations would necessarily grow out of control. But I think it's legitimately possible that such aberrations could compete with each other and die out for myriad causes (just like countless bacteria in the real world). Not to mention the possibility that whatever "controls" we institute on such nanobots (metaphorically, our "white blood cells") would evolve as fast as, or faster than, the aberrations, thus always beating them.
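To illustrate why "aberrations often die out" is plausible, here's a toy simulation in Python of a Moran-style birth-death process. Every number in it (the population size, the 5% fitness edge, the trial count) is my own illustrative choice, not a claim about real nanobots:

import random

def mutant_takes_over(pop_size=200, mutant_fitness=1.05, seed=None):
    """One trial: a single 'aberrant' mutant appears in a finite population.
    Each step, one individual reproduces (fitness-weighted) and one random
    individual dies, so the population stays fixed (a Moran-style process).
    Returns True if the mutant lineage takes over, False if it dies out."""
    rng = random.Random(seed)
    mutants = 1
    while 0 < mutants < pop_size:
        weighted = mutants * mutant_fitness
        birth_is_mutant = rng.random() < weighted / (weighted + pop_size - mutants)
        death_is_mutant = rng.random() < mutants / pop_size
        mutants += int(birth_is_mutant) - int(death_is_mutant)
    return mutants == pop_size

trials = 1000
wins = sum(mutant_takes_over(seed=i) for i in range(trials))
print(f"Mutant lineage took over in {wins}/{trials} trials "
      f"and went extinct in the other {trials - wins}.")

Even with a built-in 5% advantage, the mutant lineage goes extinct in roughly 95% of trials (the classic Moran fixation probability, about 1 - 1/r for large populations). A toy, obviously, but it shows that "aberration appears" doesn't automatically mean "aberration wins."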

NOTE: I don't claim these ideas are necessarily original (I haven't done enough research to know all that's out there). Please advise me if you've seen them before so I can give proper credit and/or correct myself.
