Budapest Workshop on Philosophy of Technology 2017

Budapest Chain Bridge

Date: 1-2 December 2017

Venue: 1132 Budapest, Victor Hugo 18-22, Room 5002.

The workshop is free of charge. If you would like to join the audience, please register here (25 audience seats are available).

The call for papers is now closed and archived here.

Program of the Budapest Workshop on Philosophy of Technology 2017 (BudPT17)



Friday, 1 December, 12:00-  Registration open

12:30-13:00  -   Mihály Héder (Budapest University of Technology and Economics / MTA Sztaki, Hungary): Welcome to BudPT2017!

Session A: Tacit Knowledge and/or Michael Polanyi
13:00-13:30  A1  Kiyotaka Naoe, invited (Tohoku University, Japan): Artifacts and Tacit Knowledge
13:30-14:00  A2  Mihály Héder (Budapest University of Technology and Economics / MTA Sztaki, Hungary): Michael Polanyi and the Epistemology of Engineering
14:00-14:30  A3  Phil Mullins, invited (Missouri Western State University, USA): Michael Polanyi on Machines as Comprehensive Entities

14:30-14:45  Coffee & snack break

Session B: Body and Art
14:45-15:15  B1  Jacopo G. Bodini (Université Jean Moulin Lyon 3, France): The screen: a body without organs
15:15-15:45  B2  Egor Efremov (Russian State University for the Humanities): Models of creative man-machine relationships in technological arts
15:45-16:15  B3  Anna Caterina Dalmasso (Université Saint Louis – Bruxelles, Belgium): Techno-Aesthetics and the Technics of the Body

16:15-16:30  Coffee & snack break

Session C: Modularity, Caravaggio and the Geiger Counter
16:30-17:00  C1  Mathieu Charbonneau (Central European University, Hungary): Modularity and Recombination in Technological Evolution
17:00-17:30  C2  Alexandra Karakas (Budapest University of Technology and Economics, Hungary): Did mirrors determine Caravaggio?
17:30-18:00  C3  Mark Thomas Young (University of Bergen, Norway): When Making Never Ends: The Geiger Counter and Early Cosmic Ray Research

Presenters' dinner

Saturday, 2 December, 9:00-

Session D: Borgmann, Smart Environments, Crises and Expert Systems
9:00-9:30    D1  Balázs Horváth (Budapest University of Technology and Economics, Hungary): Nudging in Hyperreality: A Philosophical Study of Technological Choice Architectures
9:30-10:00   D2  Shane Ryan and Orestis Palermos (Nazarbayev University, Kazakhstan, and Cardiff University, UK): Smart Environments
10:00-10:30  D3  Dániel Gergő Pintér and Peter Neuman (Budapest University of Technology and Economics / MTA Sztaki, Hungary)
10:30-11:00  D4  Ákos Gyarmathy (Budapest University of Technology and Economics, Hungary): The problem of undermined evidence

11:00-11:30  Long break with finger food

Session E: Technological Knowledge, the Nominative Case and Feenberg
11:30-12:00  E1  Maxim Mirkin and Hakob Barseghyan (University of Toronto, Canada): The Role of Technological Knowledge in Scientific Change
12:00-12:30  E2  Agostino Cera (University of Basilicata, Italy): Towards a Philosophy of Technology in the Nominative Case
12:30-13:00  E3  László Ropolyi (Eötvös Loránd University, Hungary): Technology as an aspect of human praxis
13:00-13:30  E4  Eduardo Beira, invited (University of Minho, Portugal): Technology, modernity and democracy: critical theory and agency

13:30-13:45  -   Mihály Héder (Budapest University of Technology and Economics, Hungary): Closing remarks

Chair of the Program Committee

Dr. Mihály Héder

Associate professor
Department of Philosophy and History of Science
Budapest University of Technology and Economics

Member of the Society for Philosophy and Technology

Affiliated Institutions

Budapest University of Technology and Economics and

Institute for Computer Science and Control of the
Hungarian Academy of Sciences

ELTE Institute of Business Economics

Abstracts of presentations at BudPT17


Artifacts and Tacit Knowledge

Kiyotaka Naoe

Engineering activity is said to embody tacit knowledge, which gives reliability and smoothness to the operation of a system, enables the relationship between humans and machines (e.g., endoscopic operation), and works as a foundation for knowledge sharing and knowledge creation in a group. Although almost all engineering knowledge is tacit or rooted in non-propositional knowledge, tacit knowledge is usually viewed as being personal, as suggested even by its first advocate, Michael Polanyi. In this paper, however, I focus on the collectiveness of tacit knowledge in engineering processes.

Collins contrasts what he calls collective tacit knowledge (CTK) with somatic tacit knowledge (STK), saying that, whereas the latter is personal and reducible to the explicit, the former is embedded in society from the beginning and unique to humans. Departing from Collins’s notion of social Cartesianism, which regards the body as purely causal, I highlight the phenomenal body in Merleau–Ponty’s sense. By presenting the example of a chalk factory in which more than 70% of the employees have intellectual disabilities, some of them severe, and for whom step-by-step communication is necessary instead of abstract verbal instruction, I examine the flexible structure of CTK through detailed analyses of intercorporeal relationships and participatory sense-making (PSM). Engineering activity is not executed merely in obedience to instructions or rules but has its origin in mutual sense-making by multiple people. On the foundation of mutual tuning-in relationships, such processes take place by paying joint attention to the same artifacts and situations, thematizing some things while placing others in the background, and interpreting those things as such and such with the mutual motivation to effect some action. Taking PSM into account, CTK proves to be knowledge that originates in society or, more precisely, in intercorporeal relationships, and that hovers between tacit and explicit knowledge.

From such thinking, moreover, another significant point follows: namely, the two-sidedness of tacit knowledge. On the one hand, PSM has a tendency to become stabilized; it is routinized into “taken-for-granted” knowledge through repetition (A. Schutz 1970). On the other hand, as a problem-solving activity, PSM has the flexibility to address unintended or unplanned situations (R. Crease 1994). Thus, CTK involves a delicate balance between those two poles. My discussion reveals the practical as well as theoretical significance of CTK. CTK not only confers meanings on artifacts but also provides an explanation of, for example, the organizational structure underlying the “ironies of automation” (Bainbridge 1987) and the “normalization of deviance” (Vaughan 1997).


Michael Polanyi and the Epistemology of Engineering

Mihály Héder

In his main monograph, Michael Polanyi promotes a new philosophy, the “fiduciary program,” which is meant to tackle problems facing humanity. At its core is a new epistemology called Personal Knowledge, which is also the title of the book. This includes a comprehensive description of the epistemology of engineering as a distinct mode of knowing with its own characteristics, alongside Polanyi’s two other categories, the “natural” and “exact” sciences.

In this presentation, Polanyi’s engineering epistemology is reconstructed and evaluated. Polanyi states that all knowledge is either tacit or rooted in the tacit, and he also explains how it originated from inarticulate animal knowledge. The knowledge of engineers is evolutionarily rooted in what Polanyi calls Type A learning, which involves a heuristic act of contrivance. For animals, this is essential for discovering means-ends relationships.

For human engineers the situation is not at all different. They harbor a particular kind of intellectual passion: the heuristic passion for discovering novel and economical ways of achieving goals.

What they discover are certain kinds of rules of rightness: the operational principles of machines. This concept is part of Polanyi’s hierarchical ontology. According to Polanyi, our world is made of only one substance, but there are multiple levels of existence. Some things, namely living organisms and machines, are more real than everything else, because they are not merely material: they are emergent. In Polanyi’s view, there is nothing transcendent about these entities, as they are part of nature. As such, they should be accounted for by science just like any other phenomena. And, in fact, science does this, but it does not reflect this fact because it tends to employ a faulty methodology.

Emergent entities come into existence—or emerge—from matter. This is possible because the laws of matter leave room for higher level laws—rules of rightness—to operate. In the context of machines, these higher-level laws are called operational principles. The correctly implemented machine can operate flawlessly as long as the material conditions do not deteriorate outside limits.

The knowledge of the engineer is about these rules of rightness. From this, it follows that discovery in engineering means finding new operational principles. But the rules of rightness cannot account for faulty behavior. Failures always have material causes; therefore, the engineering profession entails a good grasp of material sciences. I will argue that this approach can be the conceptual basis for basic engineering research that is in contrast with applied science, the category engineering usually falls into.


Michael Polanyi on Machines as Comprehensive Entities

Phil Mullins

In his middle and late philosophical thought, Michael Polanyi develops his ideas about what he calls a “comprehensive entity.” His comments about machines often focus on machines as one kind of comprehensive entity. These reflections review Polanyi’s conclusions about comprehensive entities and discuss his account of machines as such entities. I raise some questions about Polanyi’s account such as whether his ideas should be extended by viewing machines from a niche-construction perspective which emphasizes the “emergence” of machines as an analog of the emergence of comprehensive biotic entities which Polanyi outlines.

“Comprehensive entity” is an epistemic term whose meaning develops from Polanyi’s primary ideas about “comprehension” as an active, skillful, unformalizable integrative act of an embedded, responsively indwelling center aiming at unspecifiable achievements evaluated by self-set standards. The term “comprehensive entity” is sparingly used in Personal Knowledge (PK) and refers primarily to living beings. Recognition of living beings is a molar achievement bound up with evaluation in terms of what Polanyi calls “rules of rightness” and causes of failure manifest in living performances (PK, 328). However, soon after PK, Polanyi broadens the term “comprehensive entity” to refer simply to any object of focal attention, living or not. Such an object of attention is a whole that includes or comprehends subsidiaries. Polanyi also expanded, in the 1960s, what he had to say about comprehensive entities, and this is part of his philosophical development from his emphasis upon the “fiduciary programme” (PK, 264) to the theory of “tacit knowing” (The Tacit Dimension, xviii). Growing out of his PK notions about “molar” recognition, Polanyi carefully argues for the “ontological” aspect of tacit knowing (The Tacit Dimension, 13). However, Polanyi’s discussion of ontology is not in the modern nominalist philosophical mainstream, in which an ontological separation of that inside the mind and that outside is a presupposed starting point for philosophical inquiry. Polanyi argues for a particular kind of participative realism, one with polyvalent and bodily aspects, and his account of ontology is part of this.

Much of Polanyi’s discussion of machines as one kind of comprehensive entity is bound up with his discussion of living comprehensive entities, and this begins in Chapter 11 of PK, “The Logic of Achievement”, which opens with an account of the “logic of contriving” (PK, 328). Polanyi later revised and refined some of what he outlines in PK. He argues that comprehensive biotic entities and comprehensive mechanical entities are both dual-controlled systems. A “higher” level of control, operating in margins left open by principles operating at a “lower” level of control, gives further shape to an entity; the higher level “restricts nature in order to harness its working” (as a “machine-type boundary”; “Life’s Irreducible Structure” [“LIS”], Knowing and Being, 226). Polanyi holds that living comprehensive entities can be complex, dual-controlled hierarchical entities; what he repeatedly draws attention to is the “higher-lower” relationship of adjacent levels, and he generalizes this with suggestions about a stratified universe.

Comprehensive biotic entities are niche-embedded centers with tacit powers used for achievement, and such entities evolve in evolutionary history in conjunction with a dynamic environment. Polanyi interestingly suggests in a late essay that a new higher control level can emerge by degrees in embedded biotic systems (“LIS”, 231). But he seems to think about machines only as shaped by human beings who design them for specific purposes. Are matters this simple? Very complex contemporary machines that “learn” are certainly designed by humans, but can their purposes shift or grow? Is it possible that a higher level of control in a complex contemporary machine can emerge by degrees based on the machine’s interactions in an environment? Machines, of course, seem to have evolved in human history, but is there something which might analogically be termed “emergence” in the domain of machines? Polanyi thinks dual control implies a joining together of two levels of reality and that some comprehensive entities are “more real” than others. Are complex contemporary machines “more real” than the reality found in cobblestones or less complex machines? Should we today think about machines somewhat more as we now think about the evolution of animals involved in niche construction? Machines seem to be designed and constructed by human beings, and they shape the bio-cultural human niche, which in turn reshapes human beings who in turn, perhaps with the help of machines, re-shape other machines.


The Screen: A Body Without Organs

Jacopo Bodini

In an episode of the Netflix series Black Mirror, a device applied to the temple allows old people to temporarily explore a parallel dimension as tourists, using their old, sick bodies as screens that project them into young, functioning bodies. The place where they go, San Junipero, is a virtual world: a massive database in the cloud to which people, instead of dying, can upload their consciousness. The difference between the tourists and the residents of San Junipero is that the tourists are alive in their human bodies and travel to that dimension by coupling their bodies with a wearable device, whereas the residents have given up their human bodies and copied and pasted, so to speak, their consciousness and memories to external hardware.

The dream, or the risk, of the dematerialization of the body, imagined by the fiction of Black Mirror as by many others, is to be understood from the more general perspective of an ideology of transparency that responds to a desire for immediacy and exhibition, for which new technologies are a way to directly display and dispose of the world and of the body itself. In fact, screens have nowadays become interfaces through which we encounter others and the world: a real world as well as a virtual one. In that role, screens increasingly work as bodily prostheses that enhance and extend human perception, knowledge, desire, and action. Indeed, the coupling of the body and screens has never been as evident as in recent decades, largely due to the diffusion of wearable devices and augmented reality technology. As has been pointed out by several studies, such a massive exteriorization of human capacities into technologies could therefore risk the dematerialization of bodily experience and even engender a progressive insensibilization of our perceptive, cognitive, and relational functions.

However, the actual use of wearable screens more often shows the reversal of that ideological design. In fact, wearable screens often operate by using bodily organs as prostheses. If bodily organs function as prostheses of screens, then those screens can be conceived as what Deleuze and Guattari call a “body without organs”: an unstable, residual surface crossed by fluxes of desires, images, and data that disorganizes the hierarchy and transparency of the biotechnological organism. Thinking of the screen as a body without organs enables us to think of the screen within the libidinal body and its opacity, which invites the conceptualization of new forms of identities, desires, and memories that such disorganization provides.


Models of creative man-machine relationships in technological arts

Egor Efremov

As the complexity of technological objects rises, so does the probability of unexpected, unintended, seemingly independent behavior of the machine. Such erratic behavior presents a threat in critical and precision-dependent fields, such as warfare and medicine, but it can also be perceived aesthetically as an interesting and desirable effect. At the turn of the twenty-first century, new creative strategies and cultural forms emerged as forms of modern technopaignia, such as glitch, circuit bending, clicks & cuts, and a variety of technologically specific or format-based genres ranging from video art and demoscene to gif art and excel art.

The ontological status of non-human agents in technological arts becomes a crucial ideological and aesthetic question for practitioners of the field, especially due to the controversial problem of authorship attribution and the representation of the role of the machine (technology/algorithm/device) in creative processes. Similar creative practices can emerge independently in different cultural environments, connected solely by the possibility of misuse or reinterpretation of a given technology. This clear agency of the machine inspires a search for the Author’s corpse in the intertwined networks of developers, technologies, superusers (prosumers exploiting a given technology), users, and audiences.

I categorize possible models of creative man-machine relationships and explore the representations of the ontological statuses of the machine in technological arts. The theoretical foundation of this categorization comes from a cross-analysis of classical works by Gilbert Simondon, Martin Heidegger, and Friedrich Kittler, as well as more recent texts by Donna Haraway, Michael Kurtov, and Nick Montfort. The models are illustrated with cases from two distinctive fields: generative literature and electronic music. I identify five interpenetrating models, in which the machine is represented as:

  1. Instrumentum vocale. The classic “extension of man”, an extension of the author’s will (e.g., the programmer of a poetry generator “writes text that writes text”).
  2. “Man in a robot costume”. The machine is a metaphor of a human, represented as having unrealistic human traits, such as individuality, emotions, inspiration, etc. Machines in this model can be perceived as extensions of human nature (a man might be perceived as a mechanism, too).
  3. The Symbiont, part of a hybrid or assemblage with human wetware.
  4. The Great Other: though the machine is not an individual, it has intentions that humans might be unable to comprehend.
  5. The Environment (a variation of the aforementioned). The machine as a medium, environment, matrix, or network. It becomes much more than the developer conceived and is a self-contained source of tacit knowledge about the limits of its use. Artists and audiences may discover a Benjaminian aura through the practice of dérive through the event horizon of possible results of the algorithm.

Techno-Aesthetics and Technics of the Body: From Merleau–Ponty to Simondon and Back

Anna Caterina Dalmasso

In his account of technics, Leroi–Gourhan makes no essential distinction between the tool as a technical organ and the organ as a bodily element. A technical object —a biface, for example—emerges from the sensible matter in the same way as the hand insofar as they both are a “secretion of the body and the brain” (Leroi–Gourhan 1964, 132) and entail a “technique of the body” (Mauss 1936). In fact, technological tools and devices should never be considered in isolation, because they exist only in relation to the interminglings between bodies and society that they make possible or that make them possible.

Thus, technicity, understood in its broadest sense as exteriorization, cannot be thought of as something that is merely added to a so-called “natural” core of embodied life (Hansen 2000, 2006; Shaw 2008) but must be understood in its mutual implication with sensibility (Ihde 1979, 2001, 2010), that is, in its relationship with the development and historical evolution of the living body, understood, in its inseparable connection with the mind, as the junction between the sensible and the symbolic, the organic and the cultural, and perception and expression.

In this paper, I investigate the reciprocal implications of embodied aesthetic thinking and technical thinking in order to show how technicity, as a cultural and symbolic attitude, is rooted in the aesthetic dimension of human experience, understood not only as the relationship to artistic creation but more radically as the human body’s ability to aesthetically engage with the world. In a complementary way, I examine the sensible genesis of the living body’s technicity and address the decisive question of how technics can inflect and catalyze changes in the human sensorium, thinking, and intersubjective relationships.

My contribution articulates these questions in the wake of Merleau–Ponty’s phenomenology of the body, especially with regard to the connection between the living body’s motricity and symbolism (Merleau–Ponty 1942, 1945, 1964, 2013), and on the basis of Simondon’s groundbreaking reflection on technics, particularly his conception of techno-aesthetics—that is, a primitive form of our contact with the world or of technics in its functional aspects (Simondon 1958, 2014)—to develop a cross-reading of the theoretical account of the body and technics made by the two philosophers.


Modularity and Recombination in Technological Evolution

Mathieu Charbonneau

Cultural evolutionists typically emphasize the informational aspect of social transmission, i.e., the learning, stabilization, and transformation of mental representations along cultural lineages. Social transmission also depends on the production of public displays, such as utterances, behaviors, and artifacts, from which social learners learn. However, the generative processes involved in the production of public displays are usually abstracted away in both theoretical assessments and formal models.

The aim of this paper is to complement the informational view with a generative dimension that emphasizes how the production of public displays both enables and constrains the production of modular cultural recipes through the process of innovation by recombination. In order to avoid a circular understanding of cultural recombination and cultural modularity, we need to consider seriously the nature and structure of the generative processes involved in the maintenance of cultural traditions.

The paper offers a preliminary analysis of what recombination and modularity encompass and shows how the study of recombination and modularity depends on a finer understanding of the generative processes involved in the production phase of social transmission.

Ultimately, the paper argues that the recombination process depends on the inventive production of an interface between modules and the complex recipes in which they figure, and that these interfaces are the direct result of the generative processes involved in the production of those recipes.


Did mirrors determine Caravaggio?

Alexandra Karakas

The basic argument shared by all types of technological determinist theories is that technology is the governing force that shapes and leads society in a certain direction. Usually, such arguments are applied in fields such as social theory, technological discourse, and philosophy, but they have not been discussed in depth in relation to visual culture, and to art history in particular. Indeed, within every artwork there is always a tool that has been used, one which, on the one hand, irrevocably formed the piece and, on the other, is in some way part of it; whether it be a printer that produced a picture or a video camera that recorded a happening, technological devices play a dominant role in artworks as well. Although these examples may seem trivial, there are other cases where the character of particular technological devices has not yet been clarified.

In my paper, I argue that what David Hockney discovered, namely that artists like Caravaggio used mirrors and lenses when creating paintings, is a form of technological determinism. This means that the improvement in verisimilitude in painting happened because of a device, and not because of some sort of magical progress in the human mind. Special lenses and mirrors not only helped painters to create such masterpieces but at the same time deformed the paintings in ways that were not obvious at all. These changes can be spotted if one looks at small details rather than at a whole painting.

The aim of the paper is twofold: to point out that, because of technological devices, certain explanations of artistic expression are no longer tenable; and to show, retrospectively and through one example, that technological determinist theories can reveal as-yet-hidden layers of art history.


When Making Never Ends: The Geiger Counter and Early Cosmic Ray Research

Mark Thomas Young

This presentation seeks to problematize the dichotomy between production and use in the history and philosophy of technology through an examination of the early history of the Geiger counter. Traditional histories of the Geiger counter often imply that the production of the device ended in 1928, when its designers, Hans Geiger and Walther Müller, managed to resolve problems of interference affecting the device. However, interviews with early researchers on cosmic rays who used the Geiger counter throughout the 1930s reveal that the device continued to present researchers with a series of challenges that could be met only by innovative, productive activity. My goal in this presentation is to show how the difficulties faced by historians in delimiting the production of the Geiger counter provide us with an opportunity to question some common assumptions about the nature of the practice of making itself.

The first section of this presentation aims to demonstrate how the distinction between making and using stems from a formal conception of technological production, one which demarcates production from use in the evolution of an artifact by positing a point at which a designer’s intentions are realized materially. By illustrating the ways in which the early history of the Geiger counter subverts this model, I question the extent to which the distinction between making and using provides us with adequate conceptual resources to explain processes of technological development.

In response to this concern, the second section seeks to outline and explore an alternative conception of technological practice by drawing on the phenomenological framework for material culture outlined in the work of anthropologist Timothy Ingold. By illustrating how the function of technical artifacts is best understood to emerge after production, this account challenges the hierarchy between making and using that has hitherto provided the dominant framework for the history and philosophy of technology and calls for new forms of scholarship attentive to the creative dimensions of practice.


Nudging in Hyperreality

A Philosophical Study of Technological Choice Architectures

Balázs Horváth

This paper aims to revisit the ideas of German-born US philosopher of technology Albert Borgmann about new realities created by technology and machinery in light of how specific hyperreal environments can function as choice architectures for human agents.

The terms choice architect and choice architecture were coined by behavioral economist Richard H. Thaler and legal scholar Cass R. Sunstein. Together with John P. Balz, they wrote, “Decision makers do not make choices in a vacuum. They make them in an environment where many features, noticed and unnoticed, can influence their decisions. The person who creates that environment is, in our terminology, a choice architect.” That idea can easily be connected to Churchill’s principle, quoted by Borgmann in his book Real American Ethics: “We shape our buildings, and afterwards our buildings shape us.”

Various technological advancements and the information revolution of the 21st century have borne out Borgmann’s concerns about reality from the 1990s, as the research program on heuristics and biases in human decision making has continued to grow. Meanwhile, Daniel Kahneman’s work on judgment and decision making was awarded the Nobel Memorial Prize in Economic Sciences in 2002.

This study explores the philosophical issues of decision making that arise when the choice architecture is based on technology (e.g., GPS navigation and self-driving cars in transportation), and asks whether a Borgmannian hyperreal environment enables nudging, an aspect of the choice architecture that guides individual or group decisions ethically, or whether, by contrast, a technological choice architecture decreases the human agent’s freedom of choice.


Smart Environments

Orestis Palermos and Shane Ryan

As contemporary epistemology becomes increasingly attentive to the social dimensions of knowledge, the number of prominent adherents to the emerging field of social epistemology grows. Broadly construed, social epistemology is the systematic study of how phenomena in social environments can affect the epistemic standings of individual agents, and possibly of collective agents as well. But does the idea of a social environment exhaust all the aspects of an environment that can affect one’s epistemic standing? What if one’s epistemic environment is broader than one’s social environment? After all, our natural environment is indeed broader than our social environment. If the same holds for epistemic environments, what are the theoretical and practical ramifications for epistemology?

Epistemologists have repeatedly commented on the negative role that one’s natural environment can play in the case of skeptical scenarios. However, other than the possibly negative epistemic effect of one’s environment, there has been no systematic theorization of the possible roles, especially the positive ones, that one’s environment may play in one’s epistemic standing. That oversight constitutes a significant methodological asymmetry with the case of social epistemology, in which there is a growing tendency toward a systematic theorization of the relationship between social environments—or at least aspects of social environments—and epistemic standing.

In this paper, we elaborate on the idea of epistemic environmentalism by drawing upon certain significant insights from the externalist philosophy of mind and cognitive science in order to demonstrate some of the possible epistemic effects that one’s environment may have on one’s epistemic standing. Although our work may well be classified as a form of social epistemology, it is much more than that. As we explain, just as one’s social environment can significantly affect one’s epistemic standing, so can one’s physical environment. Accordingly, both of these aspects of an epistemic agent’s surrounding world can be manipulated to bring about positive and negative epistemic effects, a realization that can have far-reaching applications for how we think about, as well as design, future epistemic environments in ways that far exceed the scope and agenda of current social epistemology.

Apart from stressing the epistemic effects of manipulating not only one’s social environment but also one’s physical environment, epistemic environmentalism differs from social epistemology in being explicitly applied. Like standard environmentalism, epistemic environmentalism exhibits an activist attitude: it seeks not only to discuss what the future of our epistemic environments should be but also, on the basis of philosophical engineering, to participate actively in their design. Consequently, the job of the epistemic environmentalist is to engineer smart environments: to use any available epistemological insights, knowledge of individual and social natures, and existing technologies, whether analog or digital, to mold our physical and social surroundings in ways that can enhance the quality and performance of epistemic practices and services, reduce epistemic pollutants and epistemic resource consumption, and allow epistemic agents to engage informationally with each other more effectively and actively.

Having laid out that framework of epistemic environmentalism and clarified, on the basis of externalist philosophy of mind and cognitive science, the ways in which one’s physical and social environment can affect one’s cognitive and thereby epistemic standing, the paper proposes several designs for future epistemic environments. Although the epistemological agenda advanced here represents a radical departure for epistemology, it is an appropriate and unavoidable one for the future of the discipline. Future epistemologists should not remain passive observers or analysts of informational architectures to come. On the contrary, given the arrival of new forms of knowledge and technology (e.g., the social web) and our expanding understanding of implicit cognitive biases and of our dynamic interdependence with the surrounding world, the ground is now fertile for epistemology to show the world its worth.


Philosophical Aspects of Information and Communication Technology-Based Crises

An Application of Borgmann’s Approach to Understand, Predict, and Manage Emergencies in the New Millennium

Péter Neuman and Dániel Gergő Pintér

Technological development in general is intimately related to the various individual and organizational crises that occur in societies. The question of whether the effects of such development are limited to furthering some predetermined goals has persisted for centuries across a wide range of fields, including the philosophy of science, the philosophy of technology, and the history of technology. The emergence of trains and automobiles made transport much faster and thus, in the short run, opened up formerly inaccessible places, and it did not take long before those new means of transportation turned the whole world upside down. They changed not only the speed of transportation and the activities connected to it but also interpersonal relationships, families, military activity, literature and other forms of art, and practically every aspect of human life.

The fact that technological changes can have unforeseen effects is therefore by no means a new phenomenon. If the effect of such changes appears in condensed form, with immediate, robust, and frequently negative results, then societies can find themselves in critical situations. The tragic Challenger disaster is an example of that phenomenon, and its handling and the acts of communication connected to it are well documented in both the crisis management and crisis communication literature.

In our view, a crisis is an event that is expected to lead to an unstable, emotionally stressful, difficult, and dangerous series of situations affecting individuals, groups, communities, or even all of society. In our paper, we propose a framework for the study of technology-related crises that draws upon Albert Borgmann’s device paradigm and terminology.

Borgmann introduced the term Device Paradigm to explain the hidden nature and power of the technological devices operating in our world. According to him, postmodern culture is infused with technological devices to such an extent that humans are incapable of perceiving how badly human life has been affected by that hidden model of living. As technological devices increase the availability of communication channels, information, and goods, they also push themselves into the background, and as a result, people do not pay attention to the destructive tendencies of everyday media use. According to Borgmann, the devices that we rely upon to provide those commodities lie hidden in the background and have a profoundly adverse effect on people’s lives. We argue that, in a series of cases, ignorance or insufficient knowledge of the device paradigm ultimately led to a crisis-like situation. It is worth noting that the long-term resolution of those crises has inevitably required the full adoption and understanding of the device paradigm, as well as the development of focal things and practices as a way to mitigate the harmful effects of using postmodern communication technologies and to overcome human reliance upon them.

Crises typical of the Information Age are in some sense similar; however, a new source of crises could arise simply from the different types of information and information collection ubiquitous today. Following Borgmann, we observe that information communicated via technology becomes a kind of rival of reality. The distant character of technological information, to use Borgmann’s term, can also be viewed as a source of new types of crisis. In this paper, we argue that several well-documented crises actually stem from the new character of technological information. By applying Borgmann’s characteristics of technological information, we identify further properties of the triggers of that new type of crisis, which may serve not only as explanations of real crises that have occurred but also as indicators of crisis-like events likely to happen in the future.

We present an extensively documented case study—Hillary Clinton’s email scandal—to show how technology-triggered crises emerge. We also identify the indicators of the crisis and lay the groundwork for a comprehensive analysis of technology-related crises, supported with advice on managing them.


The Problem of Undermined Evidence

Accurate Entitlement for Epistemic Systems Within Decision Support Systems

Akos Gyarmathy

Decision support systems are programs that support a reasoning task executed behind the scenes and based on empirical data. Such systems help decision makers by providing procedural prescriptions: they guide the assessment of decision-making situations and the choice of an appropriate response by converting data into a list of available options with assigned probabilities and expected utilities. Although such systems apply classical Bayesian formulations to model probabilistic reasoning, the mathematical foundations of the axiomatic system of probabilistic calculation that they apply are seriously flawed and, in various cases, not credible.
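To make concrete the kind of calculation such systems automate, here is a minimal sketch of ranking options by expected utility. The option names, probabilities, and utilities are entirely hypothetical and are not drawn from any actual decision support system:

```python
# Illustrative sketch: a decision support system converts data into a
# list of options, each with probability-weighted outcomes, and ranks
# them by expected utility. All names and numbers are hypothetical.

def expected_utility(outcomes):
    """Expected utility of an option given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Each option maps to its possible outcomes with assigned probabilities.
options = {
    "treatment A": [(0.7, 10), (0.3, -5)],   # more effective, riskier
    "treatment B": [(0.9, 6), (0.1, -1)],    # safer but less effective
}

# Rank options from highest to lowest expected utility.
ranked = sorted(options, key=lambda o: expected_utility(options[o]), reverse=True)
for name in ranked:
    print(name, expected_utility(options[name]))
```

Note that the two options score very close to each other here (5.5 versus 5.3), so small errors in the assigned probabilities can flip the ranking, which is one reason the reliability of the underlying probabilistic framework matters.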

In this paper, I explore the major difficulties concerning classic conditionalization as a framework of conditional probability and demonstrate these difficulties on a generalizable case study of the medical decision support system ILLiad, which uses an extensive probabilistic framework.

I show that frameworks based on classical Bayesian formulations are inaccurate and that their safe use is limited to cases of decision making under certainty. However, the majority of cases of decision making that require extensive probabilistic calculation take place under uncertainty, where the classical Bayesian framework is inapplicable.

The main problem that I focus on in this talk is the problem of undermined evidence. The last decade has witnessed extensive debate concerning the appropriate answer to the problem of undermined evidence within the framework of advanced Bayesian formulations. I compare and assess different approaches to the problem by focusing on the epistemic aspects of the mathematical formulations of probabilistic reasoning developed over the last 50 years as putative solutions. I offer a solution for a credible way of treating evidence as uncertain by applying an accuracy-based epistemology of entitlement to inductive inference. Such an epistemology sets expected epistemic utility as the primary goal for epistemic systems applied in decision support systems. Consequently, I claim that the primary epistemic goal of an epistemic system is not the reliable acquisition of truth but the attainment of safe accuracy.
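To illustrate the contrast between evidence learned with certainty and evidence that remains uncertain, the following sketch compares classical (strict) conditionalization with Jeffrey conditionalization, one standard proposal in the literature for updating on uncertain evidence. All numbers are hypothetical and the sketch is not a model of any particular decision support system:

```python
# Sketch: classical vs. Jeffrey conditionalization on a single
# hypothesis H and evidence proposition E. All numbers hypothetical.

def conditionalize(p_h, p_e_given_h, p_e):
    """Classical (strict) conditionalization: P(H | E) via Bayes' rule.
    Assumes the evidence E is learned with certainty."""
    return p_h * p_e_given_h / p_e

def jeffrey_update(p_h_given_e, p_h_given_not_e, q):
    """Jeffrey conditionalization: E is only learned to degree q, so the
    new credence mixes the two conditional credences accordingly."""
    return p_h_given_e * q + p_h_given_not_e * (1 - q)

p_h = 0.1             # prior credence in H
p_e_given_h = 0.8     # likelihood of E given H
p_e_given_not_h = 0.2
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability

p_h_given_e = conditionalize(p_h, p_e_given_h, p_e)
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Certain evidence: credence jumps all the way to P(H | E).
print(p_h_given_e)
# Uncertain evidence (credence in E rises only to 0.7): a weaker shift.
print(jeffrey_update(p_h_given_e, p_h_given_not_e, 0.7))
```

When the new credence in E is 1, Jeffrey conditionalization reduces to the classical rule; for any intermediate credence it yields a correspondingly weaker update, which is why it is often invoked where the classical framework, applicable only under certainty, breaks down.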


The Role of Technological Knowledge in Scientific Change

Maxim Mirkin & Hakob Barseghyan

In this paper, we show that technological disciplines such as computer science, agriculture, telescope engineering, and surgery exhibit the same patterns of change as disciplines in the natural and social sciences. Specifically, we argue that changes in technological beliefs obey the laws of scientific change currently accepted in scientonomy. Those laws explain, among other things, how communal beliefs and the criteria of their evaluation change over time.

Our discussion applies a distinction between two stances that an epistemic community can take toward a theory: accept the theory as the best available description of its domain or use the theory in practical applications. Accepted theories may or may not also be used in practice, and used theories may or may not also be accepted as the best descriptions of their domains. An engineer, for example, may accept general relativity but choose to build a bridge using Newtonian physics. Whereas most scientific fields (e.g., physics, chemistry, biology, and psychology) involve both accepted and used theories, technological disciplines are traditionally seen as merely using theories without necessarily producing any accepted theories of their own. Such a view tacitly implies that technological disciplines do not obey the laws of scientific change.

However, we demonstrate that technological disciplines not only use theories but also produce accepted theories, such as “x is an effective treatment for medical condition y”, “z is a useful technology for bridge-building”, and “p is a statistically valid technique for assessing public opinion about q”. There are both theoretical and historical reasons to believe that changes in technological knowledge follow the same laws of scientific change as changes in other fields of science. From a theoretical perspective, technological knowledge is similar to knowledge produced in the natural and social sciences, since both cases involve attempts to describe a certain domain, whether natural, social, or artificial. Our thesis that changes in technological knowledge obey the same laws of scientific change is illustrated by examples from the history of sorting algorithms, crop rotations, telescopes, and rectal cancer surgery.


Toward a Philosophy of Technology in the Nominative Case

Agostino Cera

My paper presents the establishment of a philosophy of technology in the nominative case: an approach that recognizes its object as the actual “subject of history”. This approach is grounded on an anthropological hypothesis (i.e., the oikological characterization of human being) and culminates in the concept of Neoenvironmentality.

The premise of my anthropological hypothesis is the epochal awareness that the so-called “essence of man” can no longer be predicated. As a result, formulas such as “human essence” or “human nature” are replaced in my work by the notion of an anthropic perimeter (i.e., the set of conditions that define the oikological horizon within which man can recognize himself as such). With the transition from natura hominis to conditio humana, the human being can be understood only oikologically, namely on the basis of the relationship he establishes with his own vital space (oikos).

Due to his deficient biological endowment, the human being is bound by nature to mould his own oikos. I call this natural human feature worldhood, with reference to Jakob von Uexküll’s Umweltlehre and its distinction between man and animal, in which the former has a world (Welt) and the latter has an environment (Umwelt). It is possible to agree with Heidegger when he says that the peculiarity of man is his “world-forming” ability. Being world-forming, man is a naturally technological being. The oikological niche of the animal, on the contrary, is its environment: a natural mould with which it corresponds completely and immediately. The peculiarity of the animal thus consists in its environmentality.

Following Heidegger’s suggestion, I ground both human worldhood and animal environmentality in a pathic presupposition: in those fundamental moods (Grundstimmungen) that refer each of them to their respective findingness (Befindlichkeit).

In the case of the animal, such pathos corresponds to captivation (Benommenheit): an absorption in itself (apatheia) that upholds its fusion with its own vital space. By contrast, man possesses a Grundstimmung that enables him to transcend his within-the-world rootedness: thaumazein or theorein (contemplation).

Given that anthropological premise, technology emerges as the oikos of current humanity. As a consequence, “technology” indicates here the worldview that has made single technologies possible and that manifests as a very particular historical circumstance: the synthesis of disenchantment (Entzauberung) and rationalization (Rationalisierung) under the imperative of makeability (Machbarkeit).

In that systematic guise, technology demands the complete adaptation of the human being and, to that end, inhibits his original pathos (thaumazein, theorein), replacing it with an artificial captivation that assimilates him, on the pathic level, to the animal condition. Technology thus emerges as the current human oikos in the form of a (neo)environment. Within such a framework, the world becomes “overmanned”, i.e., its challenge can be tolerated by man only in a state of apatheia.

On the basis of my oikological approach, when the human being’s worldly framework takes on environmental features, he experiences an ‘animal positionality’. Therefore, the primary effect of technological neoenvironmentality is the feralization of man.

Here and now, the telos of techne turns out to be the complete redefinition of the anthropic perimeter.


Technology as an Aspect of Human Praxis

László Ropolyi 

This paper proposes a specific approach to understanding the nature of technology that encompasses the entire field of technological praxis, from the making of primitive tools to the use of the Internet. In this approach, technology is a specific form of human agency that amounts to an imperfect realization of human control over a technological situation—that is, a situation governed to an end not by natural constraints but by specific human aims. The components of such technological situations are a given collection of natural or artificial beings, humans, human aims, and situation-bound tools. Technological situation analysis makes it possible to consider the essential form of tool making, the complex system of relationships between science and technology, technological practices with and without machines, the finiteness or imperfectness of any technology, and engineering (i.e., the possibility of the creation of situations).

For a better characterization of the approach to technology, the paper also presents a comparison of philosophies of technology. Following Feenberg’s comparative analysis, the so-called fundamental question of the philosophy of technology is formulated, and its two sides are identified. The existence of such a fundamental question demands that every philosophy of technology has to declare its position in the relationship between technology and society. On the one hand, it is necessary to choose between the autonomous or non-autonomous (i.e., human-controlled) existence of technology; on the other, it is necessary to be for or against the value-laden nature of technology.

As a consequence of that analysis, we characterize our approach to technology as an alternative to typically accepted views, primarily due to its extremely general and abstract view of technological praxis. In our approach, all human praxis can be considered technological; more precisely, every human activity has a technological aspect or dimension. That is, every human practice amounts to an imperfect realization of human control over a situation that is governed to an end not by natural constraints but by specific human aims. Of course, human practice is not identical with technological praxis; it evidently has many other aspects, but every practice has a technological one. Moreover, every human situation can be considered a technological one; every human being is a technological agent; every human aim is attainable by a specific technology; and every human tool can be considered a situation-bound technological tool. The technological aspect of human practice embodies human defenselessness as well as the human commitment to successful control over the situations of human life. Without such obviously partial success, we would not survive as human beings but would return to natural situations as natural (animal) beings. In that way, every technology is a technology of humanity, and human beings, the human world, and human cultures and societies are equally products of technologies. Technology is the only tool of human self-creation, and its branches can be associated with families of life situations. Different economic, legal, psychical, social, cultural, material, and mechanical technologies serve humans’ self-creating praxis. In that sense, engineering can be considered a meta-technological activity: a specific practice of handling the components of technological situations with the aim of cultivating controllable situations in a given environment.

In that view of technology, the fundamental question of the philosophy of technology can be answered by considering the consequences of the inseparability of human praxis and technology. From that view, technology clearly has no autonomous existence and is value-laden. However, the context of its practical and cultural embeddedness has shifted from the social system to the human life-world, with the hope of achieving a Habermasian synthesis of the two.


Technology, Modernity, and Democracy: Critical Theory and Agency

Eduardo Beira

1. “Only a god can save us” (not philosophy or any purely human reflection and endeavor): these are the well-known words of Heidegger in his famous 1966 last interview with Der Spiegel, published in 1976, after his death. In the same interview, he argued again that planetary “technicity in its essence is something that man does not master by his own power” and that “up to the present we have not yet found a way to respond to the essence of technicity”. In order to be saved from “planetary technicity”, man would need to find again the irreversibly lost essence of technology (and of things in the world). Heidegger could not envisage how that could be feasible: “I know of no way to change the present state of the world immediately”, he said. But one of his few words of hope in this interview concerned the future of his own thought: “How far I come with my own effort at thought and in what way it will be received in the future and fruitfully transformed ...” (my italics).

Heidegger informed and inspired the next generations of critical thinkers. Nevertheless, dystopian and utopian visions of technology and the world continued to be produced. Dystopian visions often offer only dead-end pessimism about the future of humanity in a technologically driven society.

I consider Andrew Feenberg’s critical theory of technology to be “fruitfully transformed” thought about technology and society, along the lines that Heidegger suggested. It is an effort to recover, or rebuild, a modern concept of essence that can break the deadlock from inside the continuing thread of the critical-theory tradition. But Feenberg’s effort also draws strongly on the constructivist insights and contributions of recent science and technology studies.

“The concept of essence needs to be reconstructed and revived”, Feenberg concludes near the end of Between Reason and Experience: Essays in Technology and Modernity (2010). His proposal is to reconstruct and revive it through “life affirmation as an existential category”. He argues that “a reason that incorporates the affirmation of life in its structure is in harmony with the nature of things in a way that value neutral reason is not” (p. 207). This implies a revaluation of the role and importance of experience in the modern world relative to the scientific picture of disenchanted nature. Experience can reveal more than the “one-dimensional” reality (Marcuse) that science, in its present form, can apprehend, and it can contribute to closing the gap between existence and essence through a newly (re)designed technology responsive to values.

The dialectics between nature and society, reason and experience, function and meaning is mediated by technology. Technology is not deterministic; it is a product of social conversations (even a “social battlefield”) between different kinds of actors in the world of life, and it may evolve from pure, cold “scientific” rationality into a technology rich in the meanings (the “art” of technology) of communities, inspired by its affordances in everyday life. The integration of the new affordances revealed by experience is the product of increasing social activism through micropolitics in the communities of life, as well as through traditional mechanisms of democratic politics. A reconstructed democratic rationality is more than traditional social rationality. A rational critique of social rationalism must go beyond it and incorporate technical citizenship as a democratic contribution to technology’s (re)design.

2. Recent decades have shown the increasing role and power of technical citizenship in very different areas, which Feenberg has explored throughout his career. He argues that they offer new hope for a future in which values and meanings may be increasingly embedded in technology (re)design. We can make reason and experience converge again, after a long divergence in which meaning and essences were lost, but we need a new social and democratic framework for the (social) construction of technology and the world, one that may unveil new essences.

Technology has changed dramatically in recent decades. Heidegger’s pessimism in the 1960s can be explained in terms of the closed and inflexible nature of technology (and the associated “planetary technicity”) in his time. But a dramatic change subsequently began to emerge and continues to develop: technology became increasingly digital: programmable, flexible, equipped with powerful sensing and reacting features, small (at times very small indeed), networked (ever more often by non-material linkages), and clouded (and fogged) in increasingly cheap computing devices. These novelties are changing “the big old technology” and making change and (re)design easier and more participative. Heidegger never understood the affordances of computers or what computing could become, and he was not alone: he (as well as Marcuse) knew only the first generation of mainframes.

Now we can see an opening opportunity for an alternative modernity based in technology, through an extended democratic rationality that can promote flourishing through new affordances. Nothing is assured, but at least humans can have an opportunity to discover, and to fight for, an alternative modernity mediated by an alternative technology. Not only “a god can save us”: we can perhaps save ourselves and the world. There are no guarantees, but there is room for hope. Democratic agency in technology politics may be able to override a causality-based technology attentive only to efficiency and arid rational calculus.

3. This talk discusses Andrew Feenberg’s critical theory of technology, its main arguments, and how it fits within the complex constellation of critical thought in the context of Western post-modernism. Feenberg is looking for a hermeneutics of technology, which is needed to articulate the dimensions of meaning in technology and how meaning relates to functionality: he seeks a new entanglement between meaning and function, reason and experience.

I will focus the discussion on two main issues: (i) the interactions between reason and experience and (ii) agency and technical citizenship, as well as some implications for public policy.