Wednesday, 28 March 2018

On the Ethics of Artificial Intelligence

by Denis Larrivee

Complex technologies are shaping the manner in which we frame relations with an external world, as Heidegger forewarned: not only with its mechanical materiality, but socially, in the manner of our interactions with others, and inwardly, in how we ourselves are changed. New technologies pose questions about the nature of technology, about human nature, and about the relation between the two, and so about how we view technology and what we wish it to do to and for ourselves. These concerns are awakening broader interest in the technical communities that are their source. The documents of the ‘Ethically Aligned Design’ project concern thinking by members of the IEEE community on the impact of artificial intelligence on human wellbeing (see the multiple topics within). Significantly, its motivations bring to the fore the centrality of the human being in a classical anthropocentric perspective, one that casts old thinking in the role of handmaiden for a revival from the new Manicheism.

A Vision for Prioritizing Human Wellbeing 
with Artificial Intelligence and Autonomous Systems

Institute of Electrical and Electronics Engineers (IEEE)

Sunday, 25 March 2018

Or even, Ethics: not just an afterthought with principles so self-evident that there’s no need to think too hard about them because basically anyone can work it out even if they’ve never even studied philosophy before…

by Simon Smith

Last time, we concluded with the rather bad-tempered assertion that neuroethicists, and by extension all other practitioners of applied ethics, should not be allowed to get away with imagining that their training in their original discipline is sufficient to qualify them to make substantive moral judgements about it. Indeed, it is one thing for normal people to pass moral judgement on a whole range of issues about which their knowledge is lacking in detail. It is, however, another thing altogether for supposedly serious scholars to do so.
I would not be able to get away with pontificating on the metaphysical implications of Soft Matter Physics without being able to demonstrate, at the drop of a scientific hat, that I knew a good deal about Soft Matter Physics. This, it seems to me, is a good thing, since it is only by being able to demonstrate such knowledge that anyone could be sure of the quality of my work. This is the essence of peer review and, as such, it is essential to progress in any and every field of study. A scholar who insists on holding forth on a subject in which he or she is not trained runs the risk of making grave errors, or worse, being regarded as a charlatan.
Fortunately, as mentioned before, we have Drs Beauregard and Larrivee to fight that fight and remind their neuro-colleagues that there’s more to philosophical thinking than meets their unphilosophical eye.
And so we come back to the central point. That is, whether neuroethics, or business ethics, or ethics and the computer, or personal ethics, or any other kind of ethics is really anything more than just ethics. Do these worlds of human activity really add anything or change anything when it comes to moral thinking? To put the point a little differently, have these modes of thought and action actually created any new moral problems for us to think about? Or are we, in fact, simply faced with the same old questions about how we relate to one another, how we treat one another; how we govern our actions and our impact on others, how we think about those actions and their impact?
Coming back to the matter of technology, where the moral point is often at its sharpest, we can say that our reach and the range of opportunities for behaving badly have certainly been hugely extended. I can now threaten and abuse people all over the world without leaving my nice little study. But if we consider, for example, the recent reports of online abuse suffered by female politicians, it seems as though the question facing those who would indulge in such abuse is the same as ever: are you going to be a massive bell-end? Well, are you?
Oh, you are.
Of course, those who choose the way of the bell-end are very likely bell-ends anyway. They may have found another outlet for their bell-endery, but the bell-end was always there, always a part of them. And, as if explicit bell-end-itude isn’t enough – as it surely is – our modern bell-ends have left us feeling something like sympathy for the politicians who suffer it. I don’t mind feeling sorry for ordinary human beings who suffer unwarranted abuse, but I resent, deeply resent, being made to feel sorry for politicians who cop it.
In the end, it’s not at all clear whether or not the context, however complex and difficult it may be, really adds anything substantial to the development and working-through of ethical theories. The context may be of vital importance, but the ethical principles we apply will always be the same; they must be if they are to be genuine ethical principles. To change one’s morals depending on whether one is conducting business deals, spending time with one’s family and friends, or analysing MRI scans, is a sure sign that moral thinking has gone awry. It boils down to something like this: how I treat people depends on what I want from them, be that money, company, research material, or what have you. Whatever else this might be, it is not moral thinking.
Moral thinking depends, not upon the kind of activity we’re engaged in, or what we want to gain from those others who are engaged in it with us. It is about doing, obviously, but not the particular kind of doing. Rather, moral thinking depends, to some degree, on whether or not we are persons; which is to say, it depends on whether or not we’re the kind of creatures which can be charged with deliberate action and, therefore, at least some degree of responsibility. Are we, moreover, the kind of creatures to which dignity – as problematic as that notion may be – can fairly be imputed? Are we capable of flourishing or developing, in some sense; and are we capable of participating both constructively and destructively in that development? If we are any of these things, then moral demands and obligations will inevitably lie upon us.
Given its essentially social nature, the moral point here is not simply a matter of whether or not that with which I am interacting is a person or an object, someone, as Spaemann would say, to be recognised and engaged with or something to be utilised for some end. The dialectical or interconstitutive structure of ‘personhood’ means that how we interact with others and objects has significant consequences for what we are. Put another way, it’s not simply a matter of what my ‘other’ is, someone or something, that determines how I ought to treat them. This makes morality sound terribly self-centred, potentially sliding into some kind of Cartesian ego-isolationism. That is not at all what I’m driving at. It’s much too long and windy to go into here, but it ties up with a concept of the self that is essentially constituted by the other, that constructs itself in the image of the other, and so by the very nature of its existence is other-oriented or participatory.
Here, however, moving morality away from the question of identity (what someone or something is) was meant to convey a much more basic idea. It’s simply that drawing a more or less clear-cut distinction between human and non-human animals is not a justification for cruelty or lack of care.
The underlying structure of this kind of moral thinking is obviously Kantian. The influence of Kantian ethics on personalism is hardly surprising, given the emphasis on duty and obligation, and the straight-up opposition to any kind of utilitarianism. That said, it is worth bearing in mind that many personalists regard the absolute reliance on reason, at the expense of and in radical opposition to emotions, to be profoundly unsatisfactory. Consider this: suppose I tell my wife that my fidelity to her is born, not of love and affection, but from an obligation to abide by the moral law within. Suppose I say that to do otherwise would fail the test of universalizability, committing me to willing a contradiction, and setting my face against reason itself. Not bloody likely.
However, this too, is not something I want to get into here; another time, perhaps. The point here is only to suggest that subdivisions within the study of ethics do not, I think, bring anything new to the table. In doing the ethics of business or technology or what have you, we are ultimately just doing ethics. That’s not to say these subdivisions are not interesting or, in their own way, useful. Given the necessarily practical nature of ethics, it is always going to be a good idea to think through one’s principles in relation to concrete circumstances and real-life problems. And there are always so many of those. In the end, however, ethics is always nothing more or less than ethics, no matter the circumstances.
At least, that’s what I always thought….


Saturday, 24 March 2018

Inscriptions, Vol. 1, No. 1: 2nd Call for Papers

Consecrations: The Philosophy of Wolfgang Schirmacher and the Passing of the Human

Inscriptions, a journal of contemporary thinking on art, philosophy, and psycho-analysis, invites contributions to our inaugural issue on consecration, the philosophy of Wolfgang Schirmacher, and the notion of passing. We are looking for well-crafted and skilfully written scholarly essays and literary contributions that engage our mandate and the theme of this issue.
Passing: to pass for someone, to pass something by, to pass away; these are senses in which the term passing can be made meaningful in our lives, and in how we approach our lives in art. In psycho-analysis it is held possible to act in such a way that the subject leaves the domain of legal competency and enters into a state where we can no longer be held accountable for our acts; the term also evokes protocols for the end of analysis, i.e., when the analysand passes into the analyst. In philosophy the term seems particularly apt as a description of the way we move from the world of humanism to that which lies beyond it. To Wolfgang Schirmacher the notion of Homo generator serves to address the uncertainties of our epoch of modern technology. It is a form that generates human reality in a climate of artificiality and ecocide. Homo generator is a media artist promising a Dasein without a need for Being, certainty or simple notions of progress. The lighting of truth (Heidegger) promised by Homo generator is supplemented by an art of forgetting: only in this manner can the media artist’s sanity remain in place.
Homo generator gives shape to just living under the aegis of an ethic that is forgotten or hidden: “Concealed from our consciousness, humans live ethically, a good life behind our backs. Only in feelings, in fascination, satisfaction, joy, but also in mourning do we get a hint of ethical worlds” (Schirmacher, “Cloning humans with media,” 2000). As with Heidegger’s claim that the light of consciousness needs to be shielded, Schirmacher’s ethical life-worlds are at their most present when they are hidden from view. This leads to a most unexpected thesis: that which we consecrate stands out as most worthy when it is hidden.

We seek academic papers and literary interventions that address questions such as:
  • How can the term consecration make sense to art, literature, and philosophy?
  • In what way can the work of Wolfgang Schirmacher, such as his figure of Homo generator, give reality to our epoch and our lived experiences?
  • How can the term “passing” yield meaning in our approaches to art, psycho-analysis, and philosophy?

Submission instructions
  • Deadline for proposals: 15 March 2018
  • Deadline for full manuscripts: 15 April 2018

Academic essays should be 3,000 to 4,500 words. We also seek scholarship in the form of interviews, reviews, short interventions, disputations and rebuttals, and in these cases we are open to shorter texts. Inscriptions adheres to the Chicago Manual of Style (footnotes and bibliography). For other instructions, please see our website. We encourage potential authors to submit proposals for review prior to writing and submitting full-length manuscripts. Include title, proposal (150 words), short biography, and institutional affiliation in your preliminary submission.

All academic submissions will undergo double-blind peer review.

Literary submissions (short and long poems, aphorisms, short fiction, fables and literary essays of up to 1,800 words) will be reviewed by our fiction editor.

Submit proposals and literary fictions through our online platform at:

Torgeir Fjeld, PhD
Editor-in-Chief, Inscriptions

Sunday, 18 March 2018

Or how about, Ethics: Not Just One of 18 First-Clath County Clubth within the Domethtic Cricket Thtructure of England and Waleth


by Simon Smith

Once again, combining a deeply and dangerously repressed nature with heavy medication has done the trick. Those chthonic memories of sex in the classroom are, once again, back in their mouldy old oblong boxes, where they belong. 
The point I was trying to get at last time – and missed by a country mile – is whether or not the specialisations and fragmentations of academic philosophy, and especially ethics, really amount to very much. Do these divisions demarcate a boundary line between kinds of principles and ways of thinking, or are they just contextual markers? In either case, it seems fair to say that, if nothing else, these divisions serve to focus the attention on some of the most basic moral questions, such as “is it acceptable to treat people as though they were objects in some way?” (In case you’re wondering, the answer is “no”; and if you are wondering, what on earth are you doing here?)
Take, for example, the relatively new and relatively exciting world of neuroethics. Now, this is a field that our friends Drs Beauregard and Larrivee have been ploughing for some considerable time and, naturally, we bow to their expertise in all matters pertaining thereto. Even if we leave aside all the frankly nonsensical claims that MRI scanners can read minds, the modern neurosciences still do throw up a number of serious ethical issues. Not a few of them seem to come back, ultimately, to the compatibility of neurosciences’ flattened materialism with the moral demands supposedly being made upon it.
I say “supposedly” because I’m reasonably confident that even the most hard-bitten materialist – he who denies the reality of anything other than physical forces colliding and conditioning one another all over the place – could be delivered of a genuine moral response with sufficient provocation. One would imagine that almost any item in the news at the moment would do the trick; but that might prove too abstract. Publicly declaring him or her to be an unprincipled, plagiarising purveyor of voodoo and snake oil might do it. Or you could just call him or her a ****.
The question, however, remains: ‘is neuroethics a real specialism?’ The argument seems to be that it must be, because, in order to do it properly, one has to know an enormous amount of complicated neuroscience. But is that really true, I wonder. It depends, I suppose, on exactly what we mean when we say it.
On the one hand, the question seems to be whether we need to know and understand all the neuroscience – which, I’m assured by Dr. B, is very complicated indeed – in order to understand the moral issues and do the moral thinking required. Insist that we do, that sound moral thinking here depends on detailed knowledge of the neurological context, and the neuroscientifically uninitiated are surely entitled to ask precisely what and where are the special moral problems which only neuroscientists can grapple with. On present showing they don’t seem too much in evidence. More seriously, perhaps, given the tendency of neuroscientists to turn up in law courts, what special moral training does the neuroscientist have and where did he or she come by it? Philosophy in general and ethics in particular are not, as a rule, part of the average neuroscientist’s education. No one doubts that many a neuroethicist is pretty hot stuff, neuroscientifically speaking – no one, except, perhaps, other neuroethicists. But what, then, makes them so sure that they know what they’re talking about, when it comes to morality?
On the other hand, the question might be whether we need to know all the complicated neuroscience in order to identify the moral issues and fully understand their implications. A “yes” here would sound a lot more plausible. Obviously, it’s difficult to see or even imagine what problems might be raised in this context if one has no clue as to what actually goes on within it. Certainly, Dr. B has intimated to me that this is a common view among the initiated.
And yet, it doesn’t really change anything. It is, I think, still quite reasonable to ask what special moral training the neuroethicist has had which enables him or her to spot neuroethical problems and then tackle them. If, as often seems to be the case, the answer is “none at all”, then even as we admire the neuroethicist’s gung-ho attitude in being prepared to take on questions he or she is singularly unequipped to take on, we cannot help wondering what makes them so sure that their neuroethical questions really are so special after all.
One might even suppose, if one were being especially cynical, that neuroethics as a distinct discipline and, above all, publishing opportunity, has arisen owing to the spectacular inability of its practitioners to articulate their moral questions with sufficient clarity.
Chilly misanthropy towards fellow scholars is unbecoming, however, and does no one any good. The point here is not to suggest that the learning needed to understand the neurological context can simply be dispensed with, any more than the context itself can. Nevertheless, it should be clearly understood – more clearly than it evidently is – that Ethics is a field of scholarly enquiry in its own right, one that requires concentrated and lengthy study if it is to be understood in any depth and applied to any real purpose or value. The application of moral reasoning to another context of scholarly research takes considerably more than the homespun common sense of even the most down-to-earth neuroscientist, if, that is, neuroethicists are to avoid talking a load of old toot.


Thursday, 8 March 2018

An Introduction to Personalism: Abstract

by Benjamin Wilkinson


Personalism has been one of the most fruitful endeavors on the contemporary philosophical scene. But while much has been written about the individual personalist philosophers, few studies exist about the personalist movement as a whole. An Introduction to Personalism is a book written to fill that gap.
The tragedies of two World Wars, the Great Depression, and the totalitarian regimes of the 1930s are the historical context in which personalism arose. Juan Manuel Burgos shows the reader how personalist philosophers responded to these horrific events through a revitalization of the concept of person, developing a philosophy both rooted in the best of the intellectual tradition and capable of dialoguing with the vanguard of contemporary thought.
Burgos then delves into the potent ideas of more than twenty thinkers who have contributed to the growth of personalism. The reader will find such distinguished names as Maritain, Mounier, von Hildebrand, Wojtyła, Guardini, Marcel, Stein, Buber, Levinas, Zubiri, and Polanyi. Burgos’ encyclopedic knowledge of recent philosophy allows for a concise and well-rounded perspective on each of the personalists studied.
An Introduction to Personalism concludes with a synthesis of personalist thought, bringing together the brightest insights of each personalist philosopher into an organic whole. Burgos argues that personalism is not an eclectic hodge-podge, but a full-fledged school of philosophy, and he proves it through a dynamic and rigorous exposition of the key features of the personalist position.
Our times are marked by numerous and often contradictory ideas about the human person. An Introduction to Personalism presents an engaging anthropological vision capable of taking the lead in the debate about the meaning of human existence and winning hearts and minds for the cause of the dignity of each and every person in the 21st century and beyond. If you want to join the effort, this is the book for you.

Sunday, 4 March 2018

I mean, Ethics: Not Just a County Next to Thuffolk

by Simon Smith

The flashbacks are finally beginning to fade. That boiling sea of faces, at once horrified and fascinated, witnesses to a pedagogic train crash, no longer haunts my dreams. That succubus of sex and ethics, demon of a Thursday afternoon in late November, is banished once again; and in its wake, our thoughts turn, once again, to the matter of philosophical fragmentations.
Just before that forest of wildly thrashing limbs, exercised frantically in an indescribable series of arcane gesticulations, blotted out the sun, the point we had reached, or almost reached, was that ethics is ethics, no matter the context or point of application. This is not to say that context adds nothing at all to our moral reasoning; only that, if our basic principle is, say, not to harm others, then that ought to be upheld no matter what the circumstances. Of course, the circumstances will, to some degree, dictate the manner in which our principle is upheld; the specific way in which we avoid harming others will depend largely on what we are doing at the time.
One field of doings, as it were, which always seems to face greater demands for moral scrutiny is technology. And rightly so, one might say, given the faith we place in it to solve all our problems and make us happy, despite the lack of evidence that it can do any such thing. There is, as Susan Pinker shows in The Village Effect (Atlantic Books, 2014), very little evidence to support the commonly accepted dogma concerning the value of exposing children to computer technology at the earliest possible opportunity; quite the opposite, in fact. Given how easy touch-screens are to use – easy enough for most adults to get to grips with – it’s not clear what advantage there might be for a three-year-old to be able to do likewise rather than, say, active reading with a real human being.
One especially obvious, because potentially dramatic, field which seems to demand some hard moral thinking is that of Artificial Intelligence. As it happens, I don’t hold any special fears regarding the “rise of the machines”, either for them or us. Even if the Dotard in Chief hasn’t actually sucked the world into a nuclear firestorm long before it becomes an issue, the application of intelligence, artificial or otherwise, might well be a good idea. We’ve tried everything else and look where we’ve ended up. All around the world, people live and die in poverty; wars are fought over nothing very worthwhile and, wherever possible, ignored; mendacity, corruption, and outright criminality remain exceedingly popular; there’s every chance that we are polluting the world beyond habitability; and the Americans have a game-show host for a president. Surely now, more than at any other time in history, intelligence, any intelligence, has to be worth a try.
That said, the sheer temperamentality of much modern technology, its downright obtuseness on occasion, does sometimes make me wonder.
Speaking philosophically, which is to say, less cynically, it remains to be seen whether the conception of intelligence which lies at the heart of AI research will ever be rich enough to present any real moral concerns. At present, it seems rather too nebulous to warrant the kind of alarm some people – including made-up people like Elon Musk and the Pope – are keen to generate. Where specificity has crept into the conceptualising, the analogical extension of “intelligence” has been degraded to the point where it’s hardly recognisable. More worrying is the way that degraded analogy is projected back upon its original source: i.e. the persons who constructed it in the first place.
During the latter part of the 20th Century, it became fashionable to anthropomorphise computer technology, in direct contravention, nota bene, of IBM’s own code of conduct. Despite the rampant anthropomorphism, there remains a world of difference between “memory” and “data storage”, a difference which carelessness and linguistic ineptitude have all but erased.
This marks a dismal, though not particularly surprising, failure to understand how analogies work. It is, sadly, a symptom of the naïve realism that continues to pollute so much modern thought. (Naïve but never innocent, no matter what Peter Byrne thinks, never innocent; realism is always, ultimately, pernicious.) At the root of it is the assumption that, between analogue and analogate, there is a simple one-to-one correspondence: x is like y, therefore y is like x to the same degree and in the same way.
That’s “analogate”, by the way, not, as I assumed for many years, “analgate”; that, I suspect, would require a very different theory of language. And lubricant, lots of lubricant.
So x is to y as y is to x. Well yes, of course. Or rather, no, not really. That’s not how analogies work. Analogies aren’t mirror-images; they refract when they, or rather we, reflect. It is just possible that my love really is like a red, red rose (prickly and covered in greenfly); and I’m sure you bear more than a passing resemblance to a summer’s day (hot, sweaty, and buzzing with flies). Such analogies cannot simply be reversed, however. It would be a different thing entirely to suggest that a particular Tuesday afternoon in August was just like you.
So much seems perfectly obvious; except that it clearly isn’t, otherwise we wouldn’t be so quick to forget it. What began as a way of describing computers – data storage, it works a bit like our memory; so all the stuff you put in, the computer sort of “remembers” it – has turned back upon us. It’s like the threefold law of return in medieval witchcraft; language is a lot like magic in many ways. A computer’s data storage works a bit like our memory, therefore memory works a bit like data storage. Well, the logic is sound enough, but the idea is still cobblers. (A prime example, here, of how logic can be sound without actually being correct; demands for what necessarily must be so often are, though.)
Cobblers, it may be, but there we are, stuck with the now widely held supposition that our memories, and by extension our brains, work like computers. The next step is a simple one: we just need to let those rather vague, allusive words slip out of sight; “sort of” and “a bit” go first, of course, then “like”. No matter that these are what made the analogy work in the first place. Once they’ve been properly sublimated and repressed, we can conveniently forget that we were ever talking analogically at all and pretend that we’re really being terribly precise and scientific and technical. My, aren’t we clever?
Well, no, not really, because along with forgetting that we started out talking analogically, we’ve also, quite carefully, forgotten what we did to make the analogy work in the first place. It is unlikely that the clever dick who came up with the analogy between brains and computers actually meant it to be taken literally; the idea that those who first heard it would have taken it that way seems highly improbable.
In order to make the analogy work at all, we had to strip it down to its bare essentials, pull out all those bits that made it a genuine human experience. Stuff like consciousness and agency, all the various ways in which we ourselves are involved in our recollections; then there’s the whole web of connections, interlaced ideas and images, not to mention the way all our senses can get involved; sounds and smells can be especially evocative. And what about how we remember things? Not just as bits of data, that’s for sure; more like sudden flashes, lightning in a thunderstorm; or images of people, places, and events; sometimes a memory is a whole narrative, rich with layers of interpretation and personal perspective, wherever our attention, our consciousness, came most sharply into focus. Sometimes it’s just a feeling. Just? Oh hardly that. Do you remember the smell of school dinners? Or the Sunday night feeling you used to get when you were a child? What about the lead-up to Christmas in years gone by? Or when you had to say “goodbye” to someone terribly, terribly important to you. Not “just”, oh no.
The point here is that what we remember and how we remember are vastly more complex than what a computer does with its data. In order to apply the notion of “memory” to a machine, or any other object for that matter, all the human, all the personal, aspects of it have to be stripped away.
Of course, there’s nothing wrong with doing that; it can be a very informative process, especially when it comes to diagrammatising the physical universe – oh yes, and just where did you think all those images of energy and process and function actually came from? From the physical world itself? Indeed they did not; Hume was absolutely spot on there.
That doesn’t change the fact that the extension of (severely watered down) personal analogies is an incredibly informative and therefore valuable process. Only, it can become a problem if we forget that we were talking analogically in the first place, if we mistake our analogies for literal truths. Obviously, doing that in this case wouldn’t make the slightest bit of sense. Computers don’t do all the things we do when we remember things. Of course they don’t. Unless we convince ourselves that the stripped down analogy of memory is all that memory really is, in and of itself; unless, that is, we take that stripped down analogy and return it unto its source, re-apply it to the place from which it came. Then, we might be making a serious mistake.
There, if you like, is the real moral problem at the heart of modern technology: the ease with which it allows us, even encourages us, to objectify ourselves and others, to treat real human beings as machines. And what a very old problem that is; Descartes’ legacy inviting us to step outside the world of real experience and reduce everything – and everyone – left inside to automatons. As observers and describers, we, of course, remain above such crude mechanics. But here’s the new twist: we don’t just treat people as things, actual objects that we might, just possibly, have to encounter in the real world and so think about how we conceive of them. Now we treat them, and ourselves, as idealised objects. We’ve become our own diagrammatic fictions. And all in the name of science, of truth, of the advancement of human knowledge. How very clever of us.
But can that really be right? Those realities that can only be seen and understood, which, frequently, can only be thought, by means of analogies and diagrams, can they be more true, more real than the human beings we encounter everyday? Can they be more real than the people who taught us to speak and think and construct diagrams in the first place?
Mathematics is the language of the sciences; it provides access to the world as it really is, allegedly. Mathematics is a logical system of signs and symbols. Those symbols have no natural or actual corollary in the world; if they did, they’d be no good to science. What, I wonder, makes us so sure that a world which can only be conceived of and made sense of through those signs and symbols is more real than the people who sit across from us at the dinner table?
Oh dear, I seem to have digressed quite lamentably and given myself a case of the vapours in doing so. We started out well enough, ruminating on the fragmentation of philosophy, specifically ethics. Now here we are grumbling about the apparently universal failure to understand how analogies work and the objectification of people that follows. Quite evidently, our ruminations lack proper focus and attention; a consequence and continued effect, no doubt, of having taught classes on sex and ethics. Post-Traumatic Stress Disorder is, I believe, the right phrase.
We shall, once again, attempt to return to the matter in hand once all thoughts of talking about sex in front of sixty first-year undergraduates have returned to the dark and shadowy places of the human psyche.

…No! It’s in the trees! It’s coming!
  

Thursday, 1 March 2018

New Book by Juan Manuel Burgos

The Catholic University of America Press presents

An Introduction to Personalism
by 
Juan Manuel Burgos

Much has been written about the great personalist philosophers of the 20th century – including Jacques Maritain and Emmanuel Mounier, Martin Buber and Emmanuel Levinas, Dietrich von Hildebrand and Edith Stein, Max Scheler and Karol Wojtyla – but few books cover the personalist movement as a whole. An Introduction to Personalism fills that gap. Juan Manuel Burgos shows the reader how personalist philosophy was born in response to the tragedies of two World Wars, the Great Depression, and the totalitarian regimes of the 1930s. Through a revitalization of the concept of the person, an array of thinkers developed a philosophy both rooted in the best of the intellectual tradition and capable of dialoguing with contemporary concerns.
Our times are marked by numerous and often contradictory ideas about the human person. An Introduction to Personalism presents an engaging anthropological vision capable of taking the lead in the debate about the meaning of human existence and of winning hearts and minds for the cause of the dignity of every person in the 21st century and beyond. 

JUAN MANUEL BURGOS is professor of philosophy at CEU San Pablo University (Madrid, Spain), president of the Asociación Española de Personalismo (www.personalismo.org) and of the Asociación Iberoamericana de Personalismo (www.aipersonalismo.org).

“I know of no comparable text in English for providing a systematic overview of the personalist movement in philosophy. Burgos addresses all the major and most of the minor figures in personalism, introducing not only those generally known to Americans (such as Scheler, Mounier and Maritain) but also important figures in Spain and Poland. The book is more than simply a history, however, as in the concluding section Burgos offers his own proposal for a well-developed personalist philosophy.” – Adrian Reimers (University of Notre Dame)

Available From