2008-01-31

Octavio Paz: Otherness, Love, and Poetry

By: Ociel Flores

Department of Letters | ITESM-CEM

Otherness is a feeling of strangeness that assails man sooner or later, because sooner or later he necessarily becomes conscious of his individuality.

At some point he realizes that he lives apart from the rest; that there exists one who is not himself; that there are others, and that there is something beyond what he perceives or imagines.

Otherness is the revelation of the loss of the unity of man's being, of the primordial split. Adam discovers himself naked; having lost his innocence, he sees himself and barely recognizes himself.

For modern man, otherness is an affliction borne with pain: the modern consciousness does not accept that its individuality is a plural reality, or that behind the man who thinks there hides another who leads an "illogical" life and often maintains what reason condemns.

Octavio Paz places the analysis of the problem of otherness at the center of his reflections, and in some of his major texts he suggests the means by which man, especially his contemporary, can confront this source of anguish and resolve the conflicts it brings: dialogue, and two realizations of it, poetry and love.

Paz himself recounts in Itinerario, a book in which he reviews the significant events of his life, the occasion on which he became aware of this phenomenon. It happened in his childhood: feeling abandoned by his family, cut off from the incomprehensible world of adults, the boy recognized his solitude and heard himself crying amid the indifference of others. His cries echoed within him, and for the first time he was conscious that someone was listening: "He is the only one who hears his weeping. He has lost his way in a world that is at once familiar and remote, intimate and indifferent (...) to hear oneself crying amid universal deafness." From that moment on, Paz adds, the individual separates himself from the world and tells himself, "now you know it: you are lack and search."(1)

The individual's otherness manifests itself as the desire to recover what was lost, like the frustrated attempt of Plato's androgyne to embrace the half that Zeus, in his wrath, tore away forever. Otherness drives human beings to seek the complement from which they were separated. Thus man unites with woman, his other half, the only one who completes him and who, by restoring the perfection that the divine will disturbed, allows him to return to unity, to reconciliation.

This revelation, moreover, appears not only to the individual but also to the collectivity. Alongside the single man, we find the group of men who identify themselves as a solid, distinct unity, which is and lives in a particular way, which condemns what others defend, which believes something else. I and the others: we and the others. Otherness, then, is a problem that concerns both the isolated man and the collectivity.

Mircea Eliade points out a constant in societies that become conscious of their identity: for each of them there is always a clear difference between their own territory, the known world, and the indeterminate space that surrounds it:

"... the first is the World (that is, 'our world'), the Cosmos. The second is another world, a strange, chaotic one, peopled by specters, demons, foreigners (strangers)..."(2) In these societies, which set themselves apart upon perceiving the traits that identify them, otherness appears as a feeling of strangeness toward whatever cannot be assimilated to the known; the result is a rejection founded on fear of what is alien.

It is curious that it should have been during the Enlightenment, the age of reason, that the idea arose of finding (or creating) the universal man: the human being capable of going beyond the differences that apparently separate him from his fellows.

The same reason that conceived this generous project prevented man from confronting his "dark side" and from building bridges between the sensible manifestations and the logical elaborations of his self; thus, by failing to resolve the conflict within himself, man denied himself the opportunity to be reconciled with others. In our day, Paz affirms, the need seems to have been understood to accept the plurality of races and cultures as a condition for achieving harmonious coexistence in the world: "... in the twentieth century we have discovered plural man, different in every place. For us, universality must be not the dialogue of reason but the dialogue of men and cultures. Universality means plurality."(3)

In speaking of the distance that separates the one from the other, we must consider the first step. To recognize the existence of my fellow man, of the presence that allows me to become conscious of my individuality; to face the stranger through whom I discover myself and against whom my being is delimited, is an act that demands above all generosity. Cathérine Chalier, commenting on E. Levinas's ideas on man's relation to his fellow man, calls this moment, in which we manifest our will to accept and our readiness to recognize, the instant of ethical asymmetry; it is founded, she tells us, "... on the certainty that my concern for the other in no way depends on his concern for me. If it did, my concern might never take the form of a gesture or an act toward the other. Were that so, the one and the other would remain expectant, in a sterile wait."(4)

There exists, therefore, the possibility of abolishing the distance that separates us from him, or from that which, not being identified with what is our own, becomes a source of anguish. The precondition is the acceptance of man's original imperfection, and then his openness toward the other, toward union with the neighbor who complements him naturally, for he has always dwelt within him. Borges, in a poem dedicated to one who has died, explains how our fellow beings live within us:

"Inscripción en cualquier sepulcro" ("Inscription on Any Tomb")

"Blindly the arbitrary soul demands duration
when it has it assured in the lives of others,
when you yourself are the mirror and the replica
of those who did not live to reach your time,
and others will be (and are) your immortality on earth." (5)

And Octavio Paz, in "El prisionero," a poem from Libertad bajo palabra, explains how this phenomenon of complementarity takes place:

"Man is inhabited by silence and emptiness.
How to satisfy this hunger,
how to still this silence and people its emptiness?
How to escape my own image?
Only in my fellow man do I transcend myself,
only his blood bears witness to another existence." (6)

The solution, then, is dialogue, understood in its broadest sense: a communication of bodies and souls, through the communion of love or the transparent dialogue of poetry.

When we read the texts Paz devotes to this problem and to its resolution through love, we readily perceive the mystical character of the amorous union and of the couple's reconciliation. Love at first sight is a form of instantaneous revelation of "the strangeness and the likeness of the universe," since "woman is the unique creature, the manifestation of universal analogy."

Paz's frequent references to Novalis, on the one hand, and to André Breton, on the other, saying that woman is "man's privileged nourishment" and that "the unique woman," the woman of sublime love, is "a window onto the absolute," underscore the importance Paz grants to love as an answer to the enigma of the plurality of the conscious being and to that of the brevity of existence.

In the case of poetry, Octavio Paz did nothing but continue the exploration begun by some of the greatest poets of the West, in particular the Romantics and the poètes maudits. Gérard de Nerval, for example, perceived the doubling of his being, which led him to say that he was "the other," and Lautréamont decided that if poetry is not the product of a single voice, it must openly be made by all.

It is not strange that otherness should be a frequent subject of reflection among poets; it would seem that poetry is the ideal medium for grasping the specificity of this phenomenon. Images depict it better than concepts. Paz had already warned us: "Otherness can be explained only by analogy." "Each poet and each reader is a solitary consciousness: analogy is the mirror in which they are reflected." In this way, poetry fosters an authentic dialogue between the one and the other.

In analyzing the poems of Luis Cernuda, Paz explains this fact:

"The moment of reading is a now in which, as in a mirror,
the dialogue between the poet and his imaginary visitor is doubled
in that of the reader with the poet. The reader sees himself in
Cernuda, who sees himself in a phantom, and each one seeks in the
imaginary character his own reality, his truth."(7)

Through the poem it is therefore possible to look upon the faces of those others who live within us and to speak with them. For the poet, the first occasion arises with the writing of his poem: the lines he traces on the page gradually sketch his own image. In them the writer sees his fears, his misgivings, his desires, often unknown even to himself, and in this way he discovers not the image he believes he has, but his true face; thus, Paz tells us, "... if we are lucky enough to find ourselves (a sign of creation) we shall discover that we are a stranger."(8) In Tiempo nublado we find the description of the instant in which the other appears on the page:

"Suddenly I saw a shadow rise from the written page,
move toward the lamp, and spread across the reddish cover
of the dictionary. The shadow grew and became a figure
I do not know whether to call human or titanic. Nor could I
say its size: it was tiny and it was immense; it walked among
my books, and its shadow covered the universe."(9)

Even if the writer does not recognize himself at first sight, he still perceives in that portrait the dark side of his being: "... that fantastic portrait is real; it is the stranger who has walked beside us since childhood and of whom we know nothing, except that he is our shadow (or are we his?)."(10)

Now, the contemplation of that image demands the courage to accept the domains that consciousness hides from us. Paz puts it this way in Libertad bajo palabra:

"... the examination of conscience, the judge, the victim, the witness.

You are all three. To whom can you appeal now, and with what tricks
destroy the one who accuses you? Useless the petitions, the laments,
and the pleas. Useless to knock at condemned doors. There are no doors,
there are mirrors. Useless to close your eyes or return among men.

This lucidity no longer leaves me."(11)

The greatest benefit the poet can draw from his activity is the realization of an authentic dialogue with the other: with men, with his readers, with himself. This dialogue always requires an interlocutor who truly wishes to open himself to plurality. Paz expresses it thus: "Dialogue is but one of the forms, perhaps the highest, of cosmic sympathy."

The creation of a poem is nothing other than the realization of this dialogue: between the poet and his self, first, and between the poet (who is in the poem) and the reader, afterward.

During the first part of this process, the poet must listen to the voices that speak to him from within: "there is always another who collaborates with me. And in general he collaborates by contradicting me. The danger is that the voice denying what we say may be so strong that it silences us (...) spontaneity is nourished by dialogue." Once this "spontaneous" exchange has given rise to the poem, the reader's "sympathy" must be awaited so that the conversation may continue.

At times man deceives himself and invents false faces. When he feigns or adopts an attitude that is not his own, he tends to fall into his own trap. By dint of simulating new images, he ends up losing his own. It is natural: man has a spontaneous need to forge new appearances; he needs to transform himself according to his dreams, to "invent himself," and so he adopts faces that he keeps for a time. The adoption of a new appearance is a complex phenomenon: for a moment the original figure and the one that conceals it become a single one. It may happen that we try on a mask that offers us the image we dreamed of, and that with time it clings to our face and becomes part of us: then we will have become what we desired.

This process can also be instantaneous. A man dissatisfied with the image he contemplates every day in the mirror may decide one day to adopt a new appearance: he puts on a mask and discovers himself as he had never seen himself before. The transfiguration that then takes place does not mean that he has disguised himself, but that he has found his true face. The face he had before was not really his. By transforming the figure he thought he knew, he discovers that he possessed another, more authentic appearance.

The discovery of the true face can also come about through an operation in the opposite direction. A moment arrives when the mask we have worn all our lives (often forced on us by our surroundings) suffocates us or returns an image that no longer satisfies us. Then we tear it off and live as we have always desired.

The shedding of the false images we often keep in spite of ourselves can take place unconsciously. It is common for someone's face to light up, or for him to laugh like a child. Unconsciously, men return to the instant when they and their appearance formed a solid whole: to the moment when man lived in harmony with the universe, as in childhood. In the poem "Nuevo rostro," night erases the traces that time had engraved on the face of the beloved woman. Her dreams carry her back to her childhood, before she became conscious of her otherness:

"Among the shadows that engulf you
another face dawns.

And I feel that at my side
it is not you who sleeps,

but the girl you once were,
who was waiting only for you to fall asleep

in order to return and know me." (12)

If man's image is changing, the only certainty we retain from its inevitable mutations is his constant questioning of himself and his other selves. His appearance knows no stillness; the reconciliation in which his plurality is resolved is momentary.

In this way, we may conclude that unity does not exist. If man is time (as Heidegger would have affirmed), his life is constant movement, an uninterrupted passing: he is himself and he is another. Otherness would thus be the form in which unity unfolds, always the same, always different. The others who inhabit us are not stable; man changes, and with him his interlocutors. Man never is completely; he is always an imminence of being. For this reason he is obliged to go out of himself in order to recover his image. For this reason, Paz affirms, "... man, forever unfinished, completes himself only when he goes out of himself and invents himself."(13) Therefore, "we will be ourselves only if we are capable of being another," for "our life is ours and belongs to the others."(14)

Notes

(1) Paz, Octavio. Itinerario. México: FCE, 1993, p. 36.

(2) Eliade, Mircea. Le sacré et le profane. Paris: Gallimard, 1989, p. 32.

(3) Paz, Octavio. Hombres en su siglo. México: Seix Barral, 1990, p. 77.

(4) Chalier, Cathérine. Levinas, l'utopie de l'humain. Paris: Albin Michel, 1993, p. 100.

(5) Borges, Jorge Luis. Obras completas. Buenos Aires: Emecé, 1974, p. 35.

(6) Paz, Octavio. Libertad bajo palabra. México: Tezontle, 1949, p. 19.

(7) Paz, Octavio. Cuadrivio. México: J. Mortiz, 1991, p. 249.

(8) Ibid., p. 146.

(9) Paz, Octavio. Tiempo nublado, p. 164.

(10) Ibid., p. 165.

(11) Paz, Octavio. Libertad bajo palabra. México: Tezontle, 1949, p. 252.

(12) Ibid., p. 255.

(13) Paz, Octavio. Cuadrivio. México: J. Mortiz, 1991, p. 90.

(14) Paz, Octavio. Itinerario. México: FCE, 1993, p. 239.



Taken from the magazine Razón y Palabra.

2008-01-30

Dip once or dip twice?


OUR annual national snacking binge is almost here. It would take a very large bowl indeed to hold all the guacamole mashed from the more than 100 million avocados that are consumed on Super Bowl Sunday. (My rough calculation gives a hemispherical bowl 20 yards in diameter and three times the height of the goal post crossbars.) And guacamole is just one of many dips that will be shared around the TV.

Just in time, a scientific report has some new findings that may cause football fans to take a second look at that communal bowl of dip.

The study, to be published later this year in the Journal of Food Safety, is the only one I’ve ever seen to proclaim that it was inspired by an episode of “Seinfeld.” It was conducted as part of a Clemson University program designed to get undergraduate students involved in scientific research. Prof. Paul L. Dawson, a food microbiologist, proposed it after he saw a rerun of a 1993 “Seinfeld” show in which George Costanza is confronted at a funeral reception by Timmy, his girlfriend’s brother, after dipping the same chip twice.

“Did, did you just double dip that chip?” Timmy asks incredulously, later objecting, “That’s like putting your whole mouth right in the dip!” Finally George retorts, “You dip the way you want to dip, I’ll dip the way I want to dip,” and aims another used chip at the bowl. Timmy tries to take it away, and the scene ends as they wrestle for it.

Peter Mehlman, a veteran “Seinfeld” writer, wrote the episode. “At the time I was living in Los Angeles, in Venice,” he told me. “There was a party on one of the canals, and apparently someone dipped twice with the same chip. And a woman flipped out. ‘You just dipped twice! How could you do that? Now all your germs are in there!’ I thought, this is just too good not to use on the show.”

Timmy’s line appears to have been the first notable use of “double dip” to mean dipping a chip twice. George has to ask Timmy what it means. Mr. Mehlman said he thought that it was an obvious name for the offense.

At the party, he had sympathized with the double dipper. “We get exposed to germs in a thousand different ways,” he said. “Besides, I thought the dip was enough to kill anything. It was probably one of those ’60s-style dips with artificial dried onion soup.”

Professor Dawson told me that he had expected to find little or no microbial transfer from mouth to chip to dip, which would support George’s nonchalance. The results surprised him.

The team of nine students instructed volunteers to take a bite of a wheat cracker and dip the cracker for three seconds into about a tablespoon of a test dip. They then repeated the process with new crackers, for a total of either three or six double dips per dip sample. The team then analyzed the remaining dip and counted the number of aerobic bacteria in it. They didn’t determine whether any of the bacteria were harmful, and didn’t count anaerobic bacteria, which are harder to culture, or viruses.

There were six test dips: sterile water with three different degrees of acidity, a commercial salsa, a cheese dip and chocolate syrup.

On average, the students found that three to six double dips transferred about 10,000 bacteria from the eater’s mouth to the remaining dip.

Each cracker picked up between one and two grams of dip. That means that sporadic double dipping in a cup of dip would transfer at least 50 to 100 bacteria from one mouth to another with every bite.
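That extrapolation can be checked with back-of-the-envelope arithmetic. The sketch below uses the article's figures (about 10,000 bacteria transferred per session of double dips, one to two grams of dip per cracker) plus one assumption of mine: that a cup of dip weighs roughly 240 grams.

```python
# Rough check of the per-bite estimate. The 240 g mass of a cup of dip
# is an assumed value; the other numbers come from the article.
bacteria_transferred = 10_000      # bacteria added to the dip by double dipping
cup_of_dip_g = 240                 # assumed mass of one cup of dip, in grams
dip_per_cracker_g = (1, 2)         # grams of dip picked up per cracker

concentration = bacteria_transferred / cup_of_dip_g   # bacteria per gram
per_bite = [round(concentration * g) for g in dip_per_cracker_g]
print(per_bite)  # [42, 83]
```

The result, roughly 40 to 80 bacteria per bite, is in line with the article's "at least 50 to 100."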

The kind of dip made a difference in a couple of ways. The more acidic water samples had somewhat fewer bacteria, and the numbers of bacteria declined with time. But the acidic salsa picked up higher initial numbers of bacteria than the cheese or chocolate, because it was runny. The thicker the dip, the more stuck to the chip, and so the fewer bacteria were left behind in the bowl.

Professor Dawson said that Timmy was essentially correct. “The way I would put it is, before you have some dip at a party, look around and ask yourself, would I be willing to kiss everyone here? Because you don’t know who might be double dipping, and those who do are sharing their saliva with you.”

Professor Dawson encourages his undergraduate teams to test popular conceptions about food safety in the laboratory. Last year he published a paper on the five-second rule, which states that food dropped on the floor can be safely eaten if you pick it up before you can count to five. The rule turned out to be false.

I asked Mr. Mehlman what he thought of Professor Dawson’s study on double dipping. “It’s pretty gratifying to know that 15 years later the show continues to exist on the cultural landscape,” he said. “But it reminds me of Jerry’s joke about the scientists who developed the seedless watermelon.”

That stand-up joke opened a “Seinfeld” episode in 1994: “These guys are going, ‘No, I’m focusing on melon. Oh sure thousands of people are dying needlessly. But this,’ ” and here Mr. Seinfeld made a spitting noise, “ ‘that’s gotta stop. You ever try to pick a wet one up off the floor? It’s almost impossible. I’m devoting my life to that.’ ”

As Mr. Mehlman implied, double dipping appears unlikely to be a major public-health threat. Professor Dawson and his team write that the actual risks of double dipping are “debatable” and depend on many unknowable factors.

But it’s good to be aware that sharing a bowl of dip can mean sharing more than we’d like. And happily, the obvious preventive measure requires no deprivation, just a newly focused snack category: one-dip chips, too small for two.

Taken from The New York Times.

2008-01-23

Two AI Pioneers. Two Bizarre Suicides. What Really Happened?


Taken from WIRED Magazine, Issue 16.02.

On the morning of June 12, 1990, Chris McKinstry went looking for a gun. At 11 am, he walked into Nick's Sport Shop on a busy street in downtown Toronto and approached the saleswoman behind the counter. "I'll take a Winchester Defender," he said, referring to a 12-gauge shotgun in the display. She eyeballed the skinny 23-year-old and told him he'd need a certificate to buy it.

Two and a half hours later, McKinstry returned, claiming to have the required document. The clerk showed him the gun, and he handled the pistol grip admiringly. Then, as she returned it to its place, he grabbed another shotgun from the case, yanked a shell out of his pocket, and jammed it into the chamber.

"He's got a gun! He's got a gun!" a woman screamed, as she ran out the front door. The store emptied. He didn't try to stop anyone.

Soon McKinstry heard sirens. A police truck screeched up, and men in black boots and body armor took up positions around the shop.

The police caught glimpses of him through the store windows with the gun jammed under his chin. They tried to negotiate by phone. They brought in his girlfriend, with whom he'd just had a fight, to plead with him. They brought in a psychiatrist — McKinstry had a history of mental problems and had tried to institutionalize himself the day before. After five hours, McKinstry ripped the telephone from the wall and retreated into the basement, where he spent two hours listening to radio coverage of the standoff. Eventually, a reporter announced that the cops had decided on their next move:

Send in the robot.

McKinstry had stolen the gun because he wanted to end his own life, but now he was intrigued. He'd always been obsessed with robots and artificial intelligence. At 4, he had asked his mother to sew a sleeping bag for his toy robot so it wouldn't get cold. "Robots have feelings," he insisted. Despite growing up poor with a single mom, he had taught himself to code. At 12, he wrote a chess-playing program on his RadioShack TRS-80 Model 1.

As McKinstry cowered in the basement, he could hear the robot rumbling overhead, making what he called "Terminator" noises. It must be enormous, he thought, as it knocked over shelves. Then everything went eerily quiet. McKinstry saw a long white plume of smoke arc over the stairs. The robot had fired a tear gas canister, but it ricocheted off something and flew back the way it came. Another tear gas canister fired, and McKinstry watched it trace the same "perfectly incorrect trajectory." He realized the machine had no idea where he was hiding.

But the cops had had enough. They burst through the front door in gas masks, screaming, "Put the gun down!" McKinstry had been eager to die a few hours before, but now something in him obeyed. The gas burned his eyes and lungs as he climbed from the basement. At the top of the steps, he saw the robot through the haze. It looked like an "armored golf cart" with a tangle of cables and a lone camera eye mounted on top. It wasn't like the Terminator at all. It was a clunky remote-controlled toy. Dumb.

Three hundred miles away in a suburb of Montreal, Pushpinder Singh was preparing to devote his life to the study of smart machines. The high schooler built a robot that won him the top prize in a province-wide science contest. His creation had a small black frame with wheels, a makeshift circuit board, and a pincer claw. As the prodigy worked its controller, the robot rolled across the floor of his parents' comfortable home and picked up a small cup. The project landed Singh in the Montreal Gazette.

Push, as everyone called him, had also taught himself to code — first on a VIC-20, then by making computer games for an Amiga and an Apple IIe. His father, Mahender, a topographer and mapmaker who had studied advanced mathematics, encouraged the wunderkind. Singh was brilliant, ambitious, and strong-willed. In ninth grade, he had created his own sound digitizer and taught it to play a song he was supposed to be practicing for his piano lessons. "I don't want to learn piano anymore, I want to learn this," he said.

Singh's lifelong friend Rajiv Rawat describes an idyllic geek childhood full of Legos, D&D, and Star Trek. One of his favorite films was 2001: A Space Odyssey — Singh was fascinated by the idea of HAL 9000, the artificial intelligence that thought and acted in ways its creators had not predicted.

To create the character of HAL, the makers of 2001 had consulted with the pioneering AI researcher Marvin Minsky. (In the novel, Arthur C. Clarke predicted that Minsky's research would lead to the creation of HAL.) Singh devoured Minsky's 1985 book, The Society of Mind. It presented the high schooler with a compelling metaphor: the notion of mind as essentially a complex community of unintelligent agents. "Each mental agent by itself can only do some simple thing that needs no mind or thought at all," Minsky wrote. "Yet when we join these agents in societies — in certain very special ways — this leads to true intelligence." Singh later said that it was Minsky who taught him to think about thinking.

In 1991, Singh went to MIT to study artificial intelligence with his idol and soon attracted notice for his passion and mental stamina. Word was that he had read every single one of the dauntingly complex books on the shelves in Minsky's office. A casual conversation with the smiling young researcher in the hallway or at a favorite restaurant like Kebab-N-Kurry could turn into an intense hour-long debate. As one fellow student put it, Singh had a way of "taking your idea and showing you what it looks like from about 50 miles up."

The field of AI research that Singh was joining had a history of bipolar behavior, swinging from wild overoptimism to despair. When 2001 came out in the late '60s, many believed that a thinking machine like HAL would exist well before the end of the 20th century, and researchers were flush with government grants. Within a few years, it had become apparent that these predictions were absurdly unrealistic, and the funding soon dried up.

In the mid-'90s, researchers could point to some modest successes, at least in narrow applications like optical character recognition. But Minsky refused to abandon the grand Promethean dream of re-creating the human mind. He dismissed Deep Blue, which beat chess grand-master Garry Kasparov in 1997, because it had such a limited mission. "We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack," Minsky is quoted as saying in a book called HAL's Legacy: 2001's Computer as Dream and Reality. "No one has tried to make a thinking machine and then teach it chess."

Singh quickly established himself as Minsky's protégé. In 1996, he wrote a widely read paper titled "Why AI Failed," which rejected a piecemeal approach to research: "To solve the hard problems in AI — natural language understanding, general vision, completely trustworthy speech and handwriting recognition — we need systems with commonsense knowledge and flexible ways to use it. The trouble is that building such systems amounts to 'solving AI.' This notion is difficult to accept, but it seems that we have no choice but to face it head on."

Singh's ambitious manifesto prompted an encouraging note from Bill Gates. "I think your observations about the AI field are correct," he wrote. "As you are writing papers about your progress, I would appreciate being sent copies."

While Singh was climbing the academic ladder at MIT, McKinstry was trying to put his life back together after spending two and a half months in jail. But the suicidal standoff had given him a new sense of purpose. He liked to think that the police robot had deliberately misfired its tear gas canisters in an effort to save him. "Maybe robots do have feelings," he later mused. By 1992, McKinstry had enrolled at the University of Winnipeg and immersed himself in the study of artificial intelligence. While pursuing a degree in psychology, he began posting on AI newsgroups and became enamored with the writings of the late Alan Turing.

A cryptographer and mathematician, Turing famously proposed the Turing test — the proposition that a machine had achieved intelligence if it could carry on a conversation that was indistinguishable from human conversation. In late 1994, McKinstry coded his own chatbot with the goal of winning the $100,000 Loebner Prize for Artificial Intelligence, which used a variation of the Turing test.

After a few months, however, McKinstry abandoned the bot, insisting that the premise of the test was flawed. He developed an alternative yardstick for AI, which he called the Minimum Intelligent Signal Test. The idea was to limit human-computer dialog to questions that required yes/no answers. (Is Earth round? Is the sky blue?) If a machine could correctly answer as many questions as a human, then that machine was intelligent. "Intelligence didn't depend on the bandwidth of the communication channel; intelligence could be communicated with one bit!" he later wrote.
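The scoring rule behind a test like McKinstry's is simple enough to sketch. Assuming a hypothetical list of binary propositions and a toy candidate function (neither is McKinstry's actual data), a MIST-style score is just the fraction of yes/no questions a candidate answers in agreement with human consensus:

```python
# A minimal sketch of MIST-style scoring: only yes/no questions, graded
# against human-consensus answers. The propositions and candidates below
# are illustrative placeholders, not McKinstry's database.
propositions = [
    ("Is Earth round?", True),
    ("Is the sky blue?", True),
    ("Is fire cold?", False),
    ("Do fish live in water?", True),
]

def score(candidate):
    """Fraction of binary propositions the candidate answers correctly."""
    correct = sum(candidate(q) == truth for q, truth in propositions)
    return correct / len(propositions)

def always_yes(question):
    # A degenerate candidate: answers "yes" to everything.
    return True

print(score(always_yes))  # 0.75 on this toy list
```

On McKinstry's premise, a machine whose score matched human scores on the same list would count as intelligent, even though each exchange carries only one bit.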

On July 5, 1996, McKinstry logged on to comp.ai to announce the "Internet Wide Effort to Create an Artificial Consciousness." He would amass a database of simple factual assertions from people across the Web. "I would store my model of the human mind in binary propositions," he said in a Slashdot Q&A in 2000. "A giant database of these propositions could be used to train a neural net to mimic a conscious, thinking, feeling human being!"

The idea wasn't new. Doug Lenat, a former Stanford researcher, had been feeding information into a database called Cyc (pronounced "psych") since 1984. "We're now in a position to specify the steps required to bring a HAL-like being into existence," Lenat wrote in 1997. Step one was to "prime the pump with the millions of everyday terms, concepts, facts, and rules of thumb that comprise human consensus reality — that is, common sense." But the process of adding data to Cyc was laborious and costly, requiring a special programming language and trained data-entry workers.

Cyc was a decent start, McKinstry thought, but why not just get volunteers to input all that commonsense data in plain English? The statements could then be translated into a machine-readable format at some later date. But McKinstry's grand vision to harness the collective power of the Internet community to create an artificial intelligence had one serious flaw: The Internet community thought he was nuts.

McKinstry had been posting for years, detailing his research, his theories, and his personal life. He was known in newsgroups primarily for his outlandish rants and tall tales. He claimed to have been a millionaire at age 17. He detailed his police standoff and his experiences dropping acid ("I wandered downtown Toronto thinking and acting as if I was god").

In December 1996, snarky geeks created a newsgroup in his honor, alt.mckinstry.pencil-dick, taking as its charter "Discussion of Usenet kook McKinstry, aka 'McChimp.'" Leading the brigade was Jorn Barger, who would later run the site Robot Wisdom (and coin the term weblog). "You write like a teenager, and have shown frequent signs of extreme cluelessness," Barger emailed McKinstry in May 1995.

McKinstry never shied away from a flame war. "I'm just sick of you spouting your highly uninformed opinion all over the net," he replied to Barger. He threatened legal action against people who, in an effort to refute his theories, quoted directly from his emails. To those who made fun of his frequent misspellings, he explained that they were caused by dyslexia, not dementia.

But some of McKinstry's improbable boasts turned out to be true. Many scoffed when he claimed to have moved to Chile to work on the world's largest telescope, but he soon provided evidence that he was indeed an operator of the Very Large Telescope at the European Southern Observatory. "It's funny how often I get called a liar," he once posted. "I will no longer tolerate slander."

The eccentric researcher made friends among the bohemians and hackers of Santiago. "Chris could make people laugh and wasn't afraid to make a fool of himself in the process," recalls his ex-wife. And there was one important person who McKinstry said treated him with respect: Marvin Minsky. McKinstry claimed to have emailed Minsky in the mid-'90s, asking if it were possible "to train a neural network into something resembling human using a database of binary propositions."

"Yes, it is possible," Minsky is supposed to have replied, "but the training corpus would have to be enormous."

That was apparently all the encouragement McKinstry needed. "The moment I finished reading that email," he later recalled, "I knew I would spend the rest of my life building and validating the most enormous corpus I could."

On July 6, 2000, McKinstry retooled his pitch for a collaborative AI database. He had a business model this time, one that seemed well suited to the heady days of the dotcom boom. His Generic Artificial Consciousness, or GAC (pronounced "Jack"), would cull true/false propositions from people online. For each submission, participants would be awarded 20 shares in McKinstry's company, the Mindpixel Digital Mind Modeling Project.

Mindpixel was a term McKinstry invented to describe the individual user-submitted propositions. Pixels, short for "picture elements," are the tiny, simple components that combine to create a digital image. McKinstry saw mindpixels as mental agents that could be combined to create a society of mind. Gather enough of them — roughly a billion, he estimated — and the mindpixels would combine to create a functioning digital brain.

The criticisms and flames never let up. But McKinstry's clever stock offer managed to generate mainstream press coverage and hundreds of thousands of mindpixel submissions. He posted regular messages to his "shareholders" and talked up the enormous potential value that the Mindpixel project could have if it achieved its lofty goals. "It's like inventing teleportation," he told Wired News in September 2000. "How could you put a value on that?"

Do fish have hair? Can blue tits fly? Did Alan Turing theorize that machines could, one day, think? Did Quentin Tarantino direct Terminator 2? Is a neural network capable of learning? Is the Mindpixel project just a scam to make Chris McKinstry famous? — Questions submitted to the Mindpixel database.


Meanwhile, in Cambridge, Push Singh was pursuing a similar vision. He had teamed with Stanford researcher David Stork to create a database of commonsense knowledge through open submissions. On the surface it resembled Mindpixel, but instead of yes/no questions it compiled factual statements like "every person is younger than their mother" and "snow is cold and is made of millions of snowflakes."

In September 2000, two months after McKinstry launched Mindpixel, Singh posted a message on the rec.arts.books newsgroup to announce Open Mind Common Sense. "We have recently started a project here at MIT to try to build a computer with the basic intelligence of a person," it read. "This repository of knowledge will enable us to create more intelligent and sociable software, build human-like robots, and better understand the structure of our own minds. We invite you all to come visit our project web page, and teach our computer some of the things all us humans know about the world, but that no computer knows!"

But the Web community was dubious. The reason: McKinstry. "How is it any less moronic than Mindpixel?" Barger replied. Another poster agreed: "Should be obvious by now that these [AI] guys ... are the most successful con artists of our time."

"Mindpixel isn't moronic, it's courageous," Singh responded. "I disagree with how McKinstrey is doing it (as a company, giving out shares' that will never have any value, instead of making it public immediately). [And] the Mindpixel idea of training up a neural network' with the database is clearly ridiculous. I also believe our interface is better."

It didn't take McKinstry long to respond. "First, no 'e' in McKinstry," he fired back. "And second, your statement is misleading. The database is publically [sic] available right now, just not for commercial use." McKinstry bristled at Singh's dismissal of his stock plan. "Thems fightin' words!" And if Singh felt Mindpixel was "clearly ridiculous," perhaps he'd be willing to bet on whose database would be first to achieve intelligence? As for the dig at his site's interface, McKinstry conceded that Open Mind's was better, but blamed it on his lack of resources: "You didn't have to write it all by your lonesome."

McKinstry insisted that Mindpixel had one significant advantage over Open Mind: He required his contributor-shareholders to verify the accuracy of each other's submissions. "The net is a very open place," he wrote. "How do you keep garbage out without any form of validation mechanism? ... All you have to do is try to [imagine] Slashdot without the moderation system to see what's going to happen to your database."
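The validation scheme McKinstry describes — contributors cross-checking one another's submissions — might look something like the following sketch. The thresholds and function names are invented for illustration; the source doesn't document Mindpixel's actual rules.

```python
# Hypothetical sketch of Mindpixel-style peer validation: each
# proposition collects verdicts from several contributors, and only
# items with enough votes and strong agreement enter the corpus.
from collections import defaultdict

verdicts = defaultdict(list)  # proposition -> list of True/False votes

def submit(proposition, verdict):
    verdicts[proposition].append(verdict)

def validated_corpus(min_votes=3, min_agreement=0.75):
    """Keep propositions whose reviewers largely agree on a truth value."""
    corpus = {}
    for prop, votes in verdicts.items():
        if len(votes) < min_votes:
            continue  # too few reviewers to trust either way
        majority = max(votes.count(True), votes.count(False))
        if majority / len(votes) >= min_agreement:
            corpus[prop] = votes.count(True) > votes.count(False)
    return corpus

for v in (True, True, True, True):
    submit("Is Earth round?", v)
for v in (True, False, False, True):  # contested item, no consensus
    submit("Is the Mindpixel project a scam?", v)

print(validated_corpus())  # {'Is Earth round?': True}
```

This is essentially the moderation logic McKinstry invokes with his Slashdot comparison: without an agreement threshold, contested or garbage submissions flow straight into the training data.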

McKinstry had been stung by Singh's criticisms. But the fact that Singh called Mindpixel "courageous" and was pursuing a similar project gave him a sense of validation. His response to Barger sounds triumphant. "How many years have you been fighting this idea of mine here in these news groups?" he crowed. "Now I guess the whole MIT Media Lab is crazy too?"

McKinstry sat at his computer uploading statements to Singh's Open Mind database. "Don Adams played Maxwell Smart. Trees don't have neurons. Jesus was a superstar. Marvin Minsky was alive in 2001. Houses don't eat pork." One thought led to the next in a revealing free association. "Push is normally a verb," he typed. "McKinstry is competing with Push."

Actually, McKinstry was hoping to forge a partnership with Singh. In 2000, he hinted to his shareholders that Mindpixel and Open Mind Common Sense were going to connect their databases. Singh initially did nothing to dispel this impression. "Chris is a good guy," he told Wired News.

McKinstry's mind turned often to Singh. They had so much in common: Two young researchers obsessed with simulating common sense. Both Canadian. Both Net-savvy.

Like McKinstry, Singh was convinced that the potential of artificial intelligence was enormous. "I believe that AI will succeed where philosophy failed," he had written on his MIT homepage. "It will provide us with the ideas we need to understand, once and for all, what emotions are." According to Bo Morgan, a fellow student at MIT, Singh suggested that giving common sense to computers would solve all the world's problems.

"Even starvation in Africa?" Morgan asked.

Singh paused. "Yeah, I think so."

But Singh's ambitions were modest and grounded compared with McKinstry's. The man behind Mindpixel was certain that his database would become a thinking machine in the near future. The father of a son from his brief marriage in the '90s, he sometimes referred to GAC as his second child. He believed that he would be recognized as one of the great scientific minds in history. "He thought he deserved a Nobel Prize," says a friend who blogs under the handle Alphabet Soup. "He compared himself to Einstein and Turing. He said GAC would make him immortal."

McKinstry meant that part about immortality literally. "The only difference between you and me is the same as the difference between any two MP3s — bits," he wrote in an Amazon.com review of How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. (He gave the book three stars.) McKinstry often told friends that he intended to upload his consciousness into a machine: He would never die.

Do teenagers think they know everything? Is MIT the best tech school in the world? Did HAL 9000 ... go nuts and try to kill everyone? Does me got bad grammar? Does Wired magazine mostly write about different types of wire? Is death inevitable? — Questions submitted to the Mindpixel database.

McKinstry's hopes for a partnership with the MIT project were soon dashed. "McKinstry was fundamentally different than us," Singh's collaborator, Stork, recalled. "We thought people wouldn't participate in the project if they were making some guy in Chile rich."

McKinstry didn't let it go. On July 16, 2002, he tried to reconnect with Singh, emailing him a link to a paper on language models. It suggested a way that statements submitted to Open Mind and Mindpixel could be understood by machines. "This is what I've been babbling inarticulately about all these years. It just needs to be trained on a corpus of validated propositions," he wrote. The paper's author was Canadian. "Another coincidence," McKinstry noted.

Four days later, Singh sent an unenthusiastic reply. "Current statistical approaches are still too weak to learn complex things," he wrote. "We need some really new ideas in machine learning that go beyond what people are doing today. It helps to have the large datasets like mindpixel or openmind, but we're still missing the right learning component."

Open Mind, which would eventually garner more than 700,000 submissions in five-plus years, was now part of a Commonsense Computing division at the MIT Media Lab. Singh was pursuing another research project for his PhD. He was also coauthoring papers with Minsky and presenting his ideas at conferences and symposia around the world.

Privately, McKinstry began speaking of his resentment of Open Mind. Singh's project, he felt, had gotten all the attention simply because it was affiliated with MIT. He complained that Singh had copied his statistical model for collecting data and claimed that he had contacted a dean at MIT asking that Singh's work be taken down. (There is no evidence to support this allegation.)

Mindpixel would eventually receive roughly 1.5 million submissions, but McKinstry's lack of business skills had become apparent. He had lined up no commercial partners or applications and apparently had no intention of honoring any of the promises he'd made to his "shareholders." All he had was an enormous collection of questions ranging from "Does Britney Spears know a lot about semiconductor physics?" to "Is McKinstry a media whore with no real credentials or expertise?"

McKinstry, who said he was diagnosed as bipolar, went into decline. A fight with his latest girlfriend led to a few nights in a Chilean mental hospital. His mood was briefly buoyed when an article he'd written, entitled "Mind as Space," was chosen to run in a 2003 anthology that would feature contributions from many of the luminaries in the AI field. But as the publication of the book was repeatedly postponed, he grew more frustrated and despairing. He started wondering about his old rival again.

On January 12, 2006, McKinstry hit Singh's personal blog. "It has been hard to give this blog any attention while finishing my dissertation," Singh had written some six months earlier. "I am now Dr. Singh!" Singh also wrote about "some new ideas [Minsky] has been developing about how minds grow. The basic idea is called 'interior grounding,' and it is about how minds might develop certain simple ideas before they begin building articulate connections to the outside world."

New ideas? McKinstry commented on Singh's blog that it sounded similar to a 1993 paper in the journal Cognition, and he provided a link to the PDF. On his own blog, he wrote, "The idea reminded me strongly of some neural network experiments that I replicated in 1997." Singh never replied.

"So what exacty does a web suicide note look like?" McKinstry wrote on January 20, 2006, a week after he posted to Singh's blog. "Exctly like this."

He was sitting in a café near his home in Santiago, pounding the keys on his Mac laptop. He posted the message on his blog and a slightly different version on a forum at Joel on Software, a popular geek hangout.

McKinstry's rant was florid and melodramatic. "This Luis Vuitton, Parada, Mont Blanc commercial universe is not for me," he wrote. He talked about his history of suicidal feelings and botched attempts, and he insisted that this time things would be different. "I am certain I will not survive the afternoon," he wrote. "I have already taken enough drugs that my alreadt weakened liver will shut down very soon and I am off to find a place to hide and die."

The online forum members were understandably skeptical. McChimp was flinging bananas again, they figured. "Have a nice trip! Let us know if there's anything beyond the 7th dimension!" read the first comment. "Typical of his forum," McKinstry replied. "I am having more trouble than usual typing due to the drugs. I have to go die not. bye." Then, "It is too late. I will leave this cafe soon and curl up somewhere." A few minutes later: "I am leaving now. People are strating to notice I canot type and I am about to vomit. Take to go. Last post." Later still: "I am leave now. Permanently."

"I don't buy this for a minute," replied a familiar detractor named Mark Warner. It was enough to pull McKinstry back into the fray for one last flame war. "Warner, you were alway an ass," he replied. "I have to go vomit now and take more pills." His final post continued the theme: "I am feeling really impaired. And yes, time will tell what happens to me. I really have to get out of here. I cannot type. and want to vomit. Time to go hide."

Three days later, on January 23, after calls from panicked friends, the police checked McKinstry's apartment and found his body. He had unhooked the gas line from his stove and connected it to a bag sealed around his head. He was dead at age 38.

McKinstry's few friends say he occasionally spoke of suicide, but no one knew why he had gone through with it this time. Carlos Gaona, a younger hacker who had become his protégé, raced over to the apartment and convinced McKinstry's girlfriend to give him his laptop, his journal, the dog-eared books. And, of course, the Web was full of his thoughts, rants, dreams, and nightmares. He never got to upload his consciousness into a thinking machine, but in a sense he had been uploading himself his entire adult life. Before he died, he had replaced the home page of chrismckinstry.com with the words "Catch you later."

One blogger wondered, "If not for his belief in the permanence of the internet, that his suicidal proclamation would remain on the World Wide Web for posterity — would Chris McKinstry be alive today?"

Others wondered how this would affect the idea of collaborative AI databases. On January 28, Bob Mottram, who had once been offered the unpaid position of chief software developer at Mindpixel, wrote in a post memorializing McKinstry: "For the present, the last man standing in this game is Push Singh."

After completing his dissertation, Singh was offered a job as professor at the MIT Media Lab. He would be teaching alongside his mentor, Minsky, who credited him with helping to develop many of the ideas in his new book, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. He would have the resources to pursue his dream of "solving AI." Before assuming his position, though, he decided to take time off, as he told a friend, "to think."

Everything in Singh's life seemed to be going well. He was enjoying a relationship with a girlfriend who worked at the lab. The IEEE Intelligent Systems Advisory Board, a consortium of top AI figures around the world, had selected him as one of the top 10 researchers representing the future of the field.

But privately, Singh was suffering. He had severely injured his back while moving furniture, and though he did his best to stay engaged on campus, colleagues noticed that he was distracted. He told a friend, Eyal Amir, that there were times when he was incapable of doing anything because of the excruciating pain. Some thought it was clinical depression. Colleague Dustin Smith asked, "How much of your attention is on the pain at a given moment?"

Singh replied, "More than half."

In The Emotion Machine, Minsky suggests that chronic pain is a kind of "programming bug." He writes that "the cascades that we call 'Suffering' must have evolved from earlier schemes that helped us to limit our injuries — by providing the goal of escaping from pain. Evolution never had any sense of how a species might evolve next — so it did not anticipate how pain might disrupt our future high-level abilities. We came to evolve a design that protects our bodies but ruins our minds."

Four weeks after Chris McKinstry committed suicide, the police were dispatched to an apartment at 1010 Massachusetts Avenue near MIT. Inside, they found the 33-year-old Singh. He had connected a hose from a tank of helium gas to a bag taped around his head. He was dead.

Mahender Singh still has the robot that his son created in high school. "He thought that computers should think as you and I think," he says. "He thought it would change the world. I was so proud of him, and now I don't know what to do without him. His mother cries every day."

"If anyone was the future of the Media Lab, it was Push," wrote the director of the lab, Frank Moss, in a mass email on March 4, 2007. A memorial wiki page was set up, and friends and colleagues posted dozens of testimonials as well as pictures of the young researcher. "His loss is indescribable," Minsky wrote. "We could communicate so much and so quickly in so very few words, as though we were parts of a single mind."

Singh's childhood friend Rawat, with whom he had watched 2001 as kids in the '80s, posted too. "This might sound corny," he wrote, "but I felt at the funeral that they should play 'Amazing Grace' [as in] Spock's death scene in Star Trek II, where Kirk eulogized him as being 'the most human' being he had ever met in his travels." It would have been appropriate to Push, he said, "who was at once intellectually curious and logical (or as he put it, sensible) and deeply human."

Privately, Rawat cites a different movie. "Sometimes I think this totally ridiculous thought," he says, "that he was bumped off like the end of Terminator 2." He refers to the fate of the character Dr. Miles Dyson, who creates a neural network processor that eventually achieves sentience and turns against mankind. When a cyborg from the future warns of what's to come, an attempt is made to kill Dyson before he can complete his work. Ultimately, the scientist nobly sacrifices himself while destroying his research to prevent the machines from taking over the world. "That's a fantasy [Push] would have gotten a kick out of," Rawat says.

Amid the grieving, there were whispers about the striking parallels between Singh's and McKinstry's lives and deaths. Some wondered whether there could have been a suicide pact or, at the very least, copycat behavior. Tim Chklovski, a collaborator with Singh on Open Mind, suggests that perhaps McKinstry's suicide had inspired Singh. "It's possible that he gave Push some bad ideas," he says. (The rumors are likely to begin again: The fact that Singh committed suicide in nearly the same way McKinstry did has not been reported or widely known until this writing.)

Details have not been forthcoming from MIT. After initial reports in the media of an "apparent suicide" by Singh, a shroud of secrecy descended. Minsky and others in the department declined to be interviewed for this article. The school has long been skittish about the topic of suicide. MIT has attracted headlines for its high suicide rate in the past, and the family of a 19-year-old student who set herself on fire sued the school in 2002. A week after Singh's suicide, a columnist in the student paper urged school officials "to take a more public and active role in acknowledging and addressing the problem of mental health at the Institute." Singh's bio page and personal blog remain online, but shortly after Wired began making inquiries, MIT took down the tribute wiki.

Many say the greatest tragedy is that neither young man lived long enough to see his work bear fruit. Recently, the Honda Research Institute in Mountain View, California, began using Open Mind data to imbue its robots with common sense. "There is a nice resurgence of interest in commonsense knowledge," Amir says. "It's sad that Push didn't live to see it."

After McKinstry's long struggle for academic legitimacy and recognition, his "Mind as Space" article will finally appear in the book Parsing the Turing Test, whose publication was delayed from mid-2003 to this February. "McKinstry himself was a troubled soul who had mixed luck professionally," the book's coeditor, Robert Epstein, says. "But this particular concept is as good as many others."

In his acknowledgments, McKinstry credits Marvin Minsky for his "encouragement of my heretical ideas"; his colleagues at the European Southern Observatory's Paranal facility, "who tolerated my near insanity as I wrote this article"; and "of course the nearly fifty thousand people that have worked so hard to build the Mindpixel Corpus."

McKinstry and Singh were both cremated. Singh's sister scattered his ashes in the Atlantic, not far from MIT. McKinstry's remains are said to be under his son's bed in the UK. Meanwhile, someone is posting to newsgroups under McKinstry's name. "I have always been and will always be," one message read. "I am forever."

Contributing editor David Kushner (david@davidkushner.com) wrote about the Linkin Park cyberstalker in issue 15.06.

2008-01-22

ACHTUNG!


ALLES TURISTEN UND NONTEKNISCHEN LOOKENPEEPERS!

DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKSEN.

IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS.

ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN.


2008-01-17

You know you're a communicologist when...

This arrived by email — sad truths about the lack of identity and ideological confusion of communicologists:

TAKEN FROM THE BLOG OF PABLO G.P. (http://pablogui.spaces.live.com/), found after googling the question mentioned in the title… I looooved it.
What it means to be a communicologist (or a student of one...)

For everyone who asks what being a communicologist (or a student of…) is all about, our friend Kirilovsky sends along a little text (somewhat altered by the author of this blog) that may provide an answer…
• A communicologist chose the major because they didn't know what to study (no child ever said, "Mom, when I grow up I want to be a communicologist").
• A communicologist doesn't really know what being a communicologist is.
• A communicologist hates being asked, "So you study journalism?"
• When a communicologist reads or hears some author's name somewhere, they think, "I saw him in school, but I don't remember what he says."
• A communicologist passes through three paradigms: freshman, postmodern, graduate.
• A communicologist doesn't know how to explain to the outside world what they actually study.
• A communicologist doesn't know whether they're doing practical exercises or Graphic Visual Communication 1.
• Once they graduate, a communicologist has two options: work at an NGO or at the university radio station (and even that gets complicated).
…And here are some parameters for understanding them:
A communicologist isn't torn between one thing and another, THEY ARE IN A DICHOTOMY.
A communicologist doesn't watch television, THEY CONSUME THE CULTURE INDUSTRY.
A communicologist doesn't decide by discarding options, THEY DECIDE BY DESCARTES.
A communicologist doesn't have friends with money, THEY HAVE BOURGEOIS FRIENDS.
A communicologist isn't silent, THEY KEEP THE SOUND AT 0 DB.
A communicologist doesn't talk, THEY EMIT A MESSAGE.
A communicologist doesn't listen, THEY DECODE.
A communicologist doesn't read, THEY INTERPRET A DISCOURSE.
A communicologist doesn't think, THEY MOBILIZE CONSCIOUSNESS.
A communicologist doesn't read texts, THEY ARGUE WITH AUTHORS.
A communicologist doesn't converse, THEY INTERACT.
A communicologist doesn't work at a radio station, THEY WORK ON THE AIR.
A communicologist doesn't ask, THEY QUESTION.
A communicologist doesn't buy things, THEY FEED CAPITALISM.
A communicologist doesn't say one thing and then the opposite, THEY SPEAK IN DIALECTICAL TERMS.
A communicologist doesn't wonder about the present, BUT ABOUT THE REAL CONDITIONS OF EXISTENCE.
A communicologist doesn't make out, THEY TRANSGRESS BODILY BOUNDARIES.
In the end, we're not different; we just use a counter-hegemonic lexicon.

2008-01-07

That's why I don't like the chinolas...