Prometheus Bound (detail), begun c. 1611–18, by Peter Paul Rubens (Flemish, 1577–1640) and Frans Snyders (Flemish, 1579–1657)
Aeschylus, Prometheus Bound
….
§ 435
PROMETHEUS: Don't think
That it's out of delicacy or willfulness
That I'm silent. My heart is devoured with anxiety
When I see myself being insulted like this.
And yet, who else but I completely determined
Their privileges to these new gods?
But that's enough of that; you know the story I could
Tell you. Now listen to the plight
Of human beings, how they were childish before,
And I made them intelligent and possessed of mind.
I'll tell you about them, not because I blame them at all,
But to explain the kindness I granted them.
First of all, though they could see, they saw to no purpose,
Though hearing they didn't hear, but like
Shapes in dreams in their long life
They muddled up everything at random. They didn't know
Brick-built sunny houses, or wood-working;
But like ants that burrow, light as air,
They lived in the sunless corners of caves.
They had no reliable sign of winter,
Flowering spring, or fruitful summer; they did everything
Without intelligence, until I showed them
When the stars rose, and their settings that are hard to tell.
I also invented for them numbers,
The most outstanding cleverness, and how to put letters together,
The recording of everything, working mother of the Muses.
I first yoked the wild beasts, enslaving them
In harnesses and in pack-saddles, so that
People might have a relief from their heavy
Burdens, and brought under chariots rein-loving
Horses, the adornment of proud wealth.
And no one else but me invented the sea-wandering
Fabric-winged vehicles of ships.
Although I invented all these devices for mortals,
Alas! I myself do not have a clever means whereby
I can escape my present distress.
…
(From Seven Tragedies, translated by Herbert Weir Smyth (1857–1937), from the Loeb edition of 1926, now in the public domain, with thanks to www.theoi.com and the Perseus Project for making the text available online: https://topostext.org/work/15)
Characters:
PROMETHEUS, cousin of ZEUS, a Titan
HEPHAESTUS, son of ZEUS, an Olympian
POWER (Kratos) and FORCE (Bia), servants of ZEUS
OCEAN, uncle of Prometheus, a Titan (also uncle of ZEUS)
IO, daughter of Inachus
HERMES, son of ZEUS, an Olympian
CHORUS of OCEANIDS, daughters of OCEAN and cousins of Prometheus
NOTE: The Ontology of Digitalism is the title of a forthcoming book by Veysel Batmaz ...
2/15/25
Via digitalism, are US President Donald Trump and his digital musketeer Elon Musk helping to bury capitalism [= an upper layer above the market economy (*)] in the "USA coffin" in the graveyard of history, whether inadvertently or bluntly?
Let's have a discussion...
Please first read Digitalism vs. Capitalism by Veysel Batmaz. But before you read, kindly order it if you are interested: https://www.amazon.com/dp/B0D9SJ3XSL
(*) "Fernand Braudel explains that the 15th to 18th centuries offer very important theoretical insights into how and why capitalism was born in Europe. One of his main conclusions is that, contrary to the common view that "capitalism" is synonymous with "free market economy," these are not only different but constitute two opposite poles. Capitalism emerged out of the market economy [via technology] but consolidated at the expense of it by bending the rules of the market. The market economy is the sphere of routine economic life. It is where exchange takes place with modest profits. Markets at every level are the vessels of economic life; they seize society from the bottom to the top. Capitalism, on the other hand, is at the top of the pyramid. It is the sphere of "super" profits. It deals with the aggregates [accumulation] at the top. It has power, flexibility, and superior access to information. Even though capitalists constituted only a minority of society, they accumulated huge wealth and soon began to control the rest of society from the top of the social hierarchy. Where capitalism got stronger, societies began to be transformed in the direction of the needs of capitalism. So the capitalist class and capitalism had an influence much larger than their absolute size and thus played the role of a "lever" for the transformation of the whole society." (N. Tolga TUNCER, ESKİŞEHİR OSMANGAZİ ÜNİVERSİTESİ İİBF DERGİSİ, 2011, 6(2), 55–69)
Four articles on "DIGITALISM" that pinpoint the core development in technology but miss the real substance of the new world of ecumenic economies and politics:
Digitalism is a mode of production and consumption, not a "new era," a "fourth industrial revolution," a different "institutional logic," the "digitalization of economies," or a "phase of capitalism." It is a novel mode, just like feudalism, which capitalism began killing 500 years ago. After 500 years, digitalism is now killing capitalism.
For further information, see: Veysel Batmaz, Digitalism vs. Capitalism, Amazon KDP, 2024. Go to Amazon and order it!
[1] Gamze Sart, Orkun Yıldız, "Digitalism and Jobs of the Future", Istanbul, 2022
"The Fourth Industrial Revolution takes the automation used today to a higher level and uses technology to perform the tasks done by humans. Thus, the undeniable presence of technology is seen in every field, from biology to production, from logistics to education. The Fourth Industrial Revolution is different. First of all, people can constantly produce new information. People can connect with each other without limitation thanks to the internet and mobile devices, whose processing, storage, and imaging capacities have grown with the technological advances of the fourth industrial revolution. At the same time, thanks to developing technology, the relationship between the form of production and the elements of its processes has also been changing (Sun, 2018). Third and last, the Fourth Industrial Revolution [Digitalism] will give rise to a new economic form, the "sharing economy." As a result, new technologies are being heard of globally, such as intelligent machines, the Internet of Things, and Neuralink, which aims to implant wireless computer chips into the brain to cure neurological diseases. … Countries' economies are moving towards technology and automation, too. However, with the digitalization that came with Industry 4.0, humanity will face an unprecedented revolution in the era of computers, Artificial Intelligence (AI), and robots, and radical changes will occur in every field. Labour and occupations are among the elements that this change will transform."
[2] Metin Gürler, "The effect of digitalism on the economic growth and foreign trade of creative, Information and Communication Technology (ICT) and high-tech products in OECD countries", Istanbul, 2023
"Digitalism refers to the increasing use and integration of digital technologies in many areas of the economy, and ICT has had a profound impact on economic growth and foreign trade in the creative and high-tech industries. With the emergence and rise of digital technologies, new sectors such as e-commerce, digital media, and social networking, which create new job opportunities and drive economic growth, have led to the creation of new business opportunities and sectors. The development of these industries has also led to the creation of new jobs and increased demand for highly skilled workers, further spurring economic growth. In addition, digitalism has facilitated the internationalization of the ICT, creative, and high-tech industries; thanks to the increasing use of digitalism, it has become easier for companies to collaborate and to communicate with partners and customers around the world. In this way, businesses have been able to expand their operations globally, and foreign trade and economic growth have increased thanks to entry into new markets. Digitalism has also had a significant impact on the nature of commerce in the ICT, creative, and high-tech industries. The development of digital technologies has made the cross-border trade and exchange of digital goods and services easier and cheaper, resulting in growth in the digital economy. In this way, businesses have been able to sell their products and services globally without needing a physical presence in each market, creating new opportunities for small and medium-sized enterprises (SMEs) to participate in international trade. As a result, the impact of digitalism on the economic growth and foreign trade of the ICT, creative, and high-tech industries has been quite significant. Digitalism has created new business opportunities and industries, facilitated the internationalization of businesses, and transformed the nature of business in these industries. Four skills will be needed in the workplace of the future. These fastest-growing, highest-demand emerging skill sets are: artificial intelligence (AI)/machine learning (ML), cloud computing, product management, and social media (WEF 2023a)."
[3] Mohammad Alkarem Khalayleh, Dojanah Bader, Fatima Lahcen Yachou Aityassine, Ayat Mohammad, Majed Kamel Ali Al-Azzam, Hasan Khaled AL-Awamleh, and Anber Abraheem Shlash Mohammad, "The effect of digitalism on supply chain flexibility of food industry in Jordan," Amman, 2022
"The importance of digitalism is to keep pace with the development of global technology that has changed the ways of thinking and behavior of beneficiaries and consumers. Digital transformation also accelerates the daily way of work, so that technology is exploited at work to be faster and better, which reduces work effort and saves time to think about development and innovation. Digital transformation has become the ultimate way in which organizations work. Digitalism refers to "the change in people's communication and behavior in society as a result of the widespread use of digital technologies" (Gimpel and Roglinger et al. 2015). Digitalism creates opportunities for organizations and supply chain practices. Many organizations have started digital transformation because they have noticed the importance and value of digital technologies in their business performance and development, and organizations have also increased their administrative support for such technologies (Bughin et al. 2015)."
For the full article: Growing Science, doi: 10.5267/j.uscm.2022.6.001
Al al-Bayt University, Amman College, Al-Balqa Applied University, the World Islamic Sciences and Education University, Yarmouk University, Balqa Applied University, Amman University College, Petra University
[4] Lars Erik Kjekshus and Bendik Bygstad, "The Institutional Logic of Digitalism", Oslo, 2021
"The concept of institutional logic has proven fruitful for understanding institutional change and in IS research. An important assumption in the understanding of institutional logic is that interests, values, professional norms, and identities are embedded in the competing institutional logics within an organisation. Decision behaviours result from how these interests, norms, and identities are enabled or constrained by these institutional logics. The starting point of our study was the observation of unwanted inertia after implementing large-scale ICT (information and communication technology) systems in hospitals. How are large-scale ICT systems related to organizational development and management? In this article, we show how ICT in organisations can be seen as an institutional logic in itself. We suggest digitalism as a term for a new institutional logic, as opposed to other, more well-known logics in organizations, such as managerialism and professionalism. Applying an institutional-logic way of understanding ICT allows us to unfold a pattern and to explain the impact of change and stability that ICT has on organisations. To develop our argument, we combine organisational change research and institutional theory with information systems research on enterprise architecture and large-scale ICT systems. The institutional perspective unfolds the institutional features of large-scale ICT and contributes to the explanation of strategies which encompass organisational change and development, in a dialectic manner of both deterministic and voluntaristic perspectives. Digitalism represents a new way of understanding organisational development and adaptation, and it challenges the mainstream understanding of organisational behaviour as well as the established IS literature. Our research aim is to analyse the implementation of ICT systems in healthcare organisations according to this theoretical framework. In the last part of the article, we discuss the impact of different blends of institutional logics and why it is useful to understand ICT as an institutional logic in itself. The practical result of ignoring digitalism and instead seeing ICT only as a tool is unwanted inertia and organisational dysfunctionalities. We illustrate our arguments with examples from a case of ICT implementation at a large Norwegian hospital where digitalism was not acknowledged.
Digitalism represents a new set of regulations, values, integrations, and perspectives on the co-ordination of organisations. Introducing large-scale systems, such as DIPS, brings digitalism into healthcare organisations. Does digitalism apply outside the healthcare field and in smaller organisations? We believe that the answer is yes, but this should be investigated by further research."
Department of Sociology and Human Geography, Faculty of Social Sciences, and Department of Informatics, Faculty of Mathematics and Natural Sciences, University of Oslo
Did Daron Acemoğlu reply to me, plagiarize from me, or copy me? Which one happened at the UBS speech in Zurich, February 2025?
Above you saw that I have included two screenshots of his video and the cover of my book Digitalism vs. Capitalism, printed by Amazon KDP on July 28, 2024, which has a part criticizing Daron Acemoğlu's approach to institutions and technology, besides the approaches of Harari, Suleyman, the Economist, and Varoufakis. Is DA following me?
Daron Acemoğlu gave a speech about technology and said that whoever controls the technology controls the world. He did not mention technology in that sentence per se; what he said was, "Whoever controls Artificial General Intelligence (AGI) controls the world." His whole speech was devoted to technological progress and how it made the world miserable, while defending capitalists who tried hard to spread prosperity despite technology. Some highlights of his speech were groundless, an imitation of what I had written in my book Digitalism vs. Capitalism, and the very opposite of what he had theorized before, which earned him the Swedish central bank's Nobel Prize.
He is fond of banks. The UBS
Center for Economics in Society, or UBS Center in short, is an Associated
Institute at the Department of Economics of the University of Zurich. It was
established in 2012, enabled by a founding donation by UBS, which the bank made
on the occasion of its 150th anniversary. In view of the generous donation, the
university named the UBS Center after its benefactor.
In his speech, he did not mention INSTITUTIONS as the controlling agents of prosperity; instead, POWER was. CHOICE seems to have lost its aura in Daron Acemoğlu's paradigm... All of a sudden he became a technological determinist, as a recent Nobelist. Alfred Nobel was one of the technologists, wasn't he?
Acemoğlu was announced by UBS
as such: “Artificial intelligence has lately been reshaping nearly every sector
of the economy, raising profound questions about the future of work, wealth,
and power. Will these advancements enhance the intelligence and performance of
human beings, or will they deepen inequality and keep on establishing power
among a privileged few? In his lecture at UZH, Nobel laureate Daron Acemoğlu
highlights the necessity of implementing suitable AI regulations to benefit
society.”
First of all, technology does not "deepen inequality or keep on establishing power for a privileged few." These things are done by capitalists (or kingdoms, empires, states), not by technology. Secondly, technology is the extension of human intelligence across space and time; it is not controlled by capitalists, states, or hegemonic classes. Thirdly, the power classes use technology and reproduce it as they see fit to exert their power. The hammer and the abacus were two of the first artificial intelligences of HOMO FABER. No wonder China produced DeepSeek, which might end capitalism in a deepsheet.
All human gatherings entered a radical phase after capitalism. The industrial revolution (technology), starting around the 1500s with mechanical textile spinning wheels and steam engines and ending today with artificial intelligence, marks the beginning of digitalism. Capitalism is now withering away: intrinsic ways of production sealed with digital outputs are changing the capitalist commodity/"exchange value" into the "use value of goods and services." In digitalism, there are and will be abundant goods and services that have only use value and no exchange value. Also, the accumulation of capital, which enables exploitation through surplus value, is now phasing out. This is the end of the "capitalist mode of production." We are entering the "digital mode of production and consumption." This is the main reason I describe the recent revolution as the digital hunter-gatherer revolution. History always repeats itself in a higher form, technology.
(See: Karl Marx, Value, Price, and Profit, ed. Eleanor Marx Aveling, International Publishers, 1974, and Karl Marx, Wage Labour and Capital, intro. F. Engels, International Publishers, 1973.)
How unimaginable this is! TIKTOK is an online platform, and Trump is the President of the United States of America. That this negotiation is happening cannot be attributed to Trump's entrepreneurship. Trump is effectively devoting his energies to burying the old order and establishing a new one, Novus ordo seclorum, as stated on the dollar bill, because capitalism is being killed by digitalism. Thus he is becoming the first gravedigger of capitalism. In the meantime, Trump is opening up new holes in the capitalist soil as a new mode of production flourishes with new technology.
Actually, this is an old story. The dollar bears two mottos that are directly related to the writings of the Roman poet Virgil. The first phrase, Annuit Coeptis, is situated directly over the eye of the capitalist pyramid. It is derived from the Latin "annuo," which means "to nod" or "to approve," and "coeptum," which means "undertakings." Musk must be aware of this as well... Located directly beneath the pyramid of capitalism on the dollar, Novus ordo seclorum literally means "a new order of the ages." The dollar's triumph was the last triumph of capitalism, which swept away the old world order for decades, as Pax Romana and feudalism once had; during that reign the USA's most important export was wars, and it was ironically called "Pax Americana." Trump's deal with TIKTOK is now starting another Novus ordo seclorum, in which capitalism is nodding to approve the undertakings of digital technology. The Founding Fathers saw this development of history; why do American academics not see it?
All of it is written on the dollar. This is why the so-called most powerful president in the world is now at the bargaining table with a digital company. If he bans TIKTOK, there will be another TICTOC; if he gets half of TIKTOK, then he has started the burial of capitalism. Digitalism is coming...
In one of my lectures, "Digital Communication," I always give examples of three farsighted contributors who foresaw technological developments long before the actual innovations took place. Famously, Jules Verne and Alan Turing were the pioneers of this genre of literature and science; Octave Uzanne, Vannevar Bush, and Cahit Arf were more accurate in what they foresaw becoming reality. To me, technology has its own consciousness embedded in HOMO FABER, free from all social, economic, and political occurrences. Technology is the determinant of humanity. Uzanne, Bush, and Arf are the visioneers.
(1) Octave Uzanne
The end of the nineteenth century is still widely referred to as the fin de siècle, a French term that evokes great, looming cultural, social, and technological changes. According to at least one French mind active at the time, among those changes would be a fin des livres as humanity then knew them. "I do not believe (and the progress of electricity and modern mechanism forbids me to believe) that Gutenberg's invention can do otherwise than sooner or later fall into desuetude," says the character at the center of the 1894 story "The End of Books." "Printing, which since 1436 has reigned despotically over the mind of man, is, in my opinion, threatened with death by the various devices for registering sound which have lately been invented, and which little by little will go on to perfection."
First published in an issue of Scribner's Magazine (viewable at the Internet Archive), "The End of Books" relates a conversation among a group of men belonging to various disciplines, all of them fired up to speculate on the future after hearing it proclaimed at London's Royal Institute that the end of the world was "mathematically certain to occur in precisely ten million years." The participant foretelling the end of books is, somewhat ironically, called the Bibliophile; but then, the story's author Octave Uzanne was famous for just such enthusiasms himself. Believing in "the success of everything which will favor and encourage the indolence and selfishness of men," the Bibliophile asserts that sound recording will put an end to print just as "the elevator has done away with the toilsome climbing of stairs."
These 130 or so years later, anyone who’s been to Paris knows that the elevator has yet to finish that job, but much of what the Bibliophile predicts has indeed come true in the form of audiobooks. “Certain Narrators will be sought out for their fine address, their contagious sympathy, their thrilling warmth, and the perfect accuracy, the fine punctuation of their voice,” he says. “Authors who are not sensitive to vocal harmonies, or who lack the flexibility of voice necessary to a fine utterance, will avail themselves of the services of hired actors or singers to warehouse their work in the accommodating cylinder.” We may no longer use cylinders, but Uzanne’s description of a “pocket apparatus” that can be “kept in a simple opera-glass case” will surely remind us of the Walkman, the iPod, or any other portable audio device we’ve used.
All this should also bring to mind another twenty-first century phenomenon: podcasts. “At home, walking, sightseeing,” says the Bibliophile, “fortunate hearers will experience the ineffable delight of reconciling hygiene with instruction; of nourishing their minds while exercising their muscles.” This will also transform journalism, for “in all newspaper offices there will be Speaking Halls where the editors will record in a clear voice the news received by telephonic despatch.” But how to satisfy man’s addiction to the image, well in evidence even then? “Upon large white screens in our own homes,” a “kinetograph” (which we today would call a television) will project scenes fictional and factual involving “famous men, criminals, beautiful women. It will not be art, it is true, but at least it will be life.” Yet however striking his prescience in other respects, the Bibliophile didn’t know – though Uzanne may have — that books would persist through it all.
(2) Vannevar Bush
Vannevar Bush’s Visionary Essay: “As We May Think”
The essay, written in 1945 by Vannevar Bush, was published in the Atlantic Monthly. It introduced science to a new way of thinking, making it clear that for years, all inventions had only extended humankind's physical powers, rather than the powers of their minds.
Vannevar Bush
Vannevar Bush was an American engineer, born on the 11th of March 1890. He was a Vice President of MIT, held many patents on analogue computers, and was the inventor of the differential analyzer.
From 1941, he directed the United States Office of Scientific Research and Development. This organization performed a great deal of military research, such as developing radar systems, as well as overseeing the Manhattan Project, which produced the first nuclear bombs.
Written in 1945, the essay titled "As We May Think" was published in the Atlantic Monthly in July, and a condensed version was republished in September of the same year, practically straddling the Hiroshima and Nagasaki bombings.
Bush expressed his concern about scientific efforts moving in the direction of mass destruction rather than understanding. He explained the desire and the need for a kind of "collective memory machine," with enormous potential to make knowledge more accessible to all, answering the question:
"How can technology contribute to the wellbeing of humanity?"
The essay bases its reasoning on a premise: human knowledge is a web of connected knowledge, with a universal dimension that cannot be limited to the life of an individual.
Knowledge is the result of a continuous process, built due to a fruitful collaboration among scientists, and includes the wealth of all human knowledge, where access to scientific information is a necessary condition for the growth of mankind.
In fact, the essay did not merely pose a technical question; it was above all a strong philosophical and political reflection upon how knowledge is produced and communicated.
The Concept of Memex
Bush, in the sixth section of the eight-part essay, presents the “memex”, a sort of extension of human memory in the form of a mechanical desk containing a microfilm archive inside. This would have been the storage method for books, registers, and documents, in order to subsequently be able to reproduce them and associate them with each other.
The memex takes its name from the words “memory extender”, and its purpose was considered essential by Bush, saying that
“The human mind […] operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain.”
In fact, the memex would help a researcher remember things more quickly; in addition, the researcher could quickly consult an archive of information by typing the code of a register or book, so that it could be called up and immediately viewed on one of its screens.
The interface was simple, consisting of buttons and levers.
Pushing the lever to the right moves to the next page of a document, and pushing it even further to the right causes the memex to scroll 10 pages at a time. Moving the lever to the left does the same in the opposite direction, while a special button allows a complete shift, repositioning to the first page of the text.
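The lever-and-button navigation described here can be sketched as a toy state machine. This is only an illustration of the interface Bush describes; the class name, the page count, and the method names are invented for this sketch and are not taken from the essay.

```python
# A toy model of the memex lever navigation. All names here are
# illustrative assumptions, not terminology from "As We May Think".

class MemexViewer:
    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.page = 1  # currently displayed page

    def lever_right(self):       # advance one page
        self.page = min(self.page + 1, self.total_pages)

    def lever_far_right(self):   # scroll ten pages at a time
        self.page = min(self.page + 10, self.total_pages)

    def lever_left(self):        # back one page
        self.page = max(self.page - 1, 1)

    def lever_far_left(self):    # back ten pages
        self.page = max(self.page - 10, 1)

    def reset_button(self):      # reposition to the first page
        self.page = 1

viewer = MemexViewer(total_pages=50)
viewer.lever_far_right()   # jump ahead ten pages
viewer.lever_right()       # then one more
print(viewer.page)         # 12
```

The clamping with `min`/`max` simply keeps the "lever" from scrolling past either end of the document.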
Moreover, the user of the memex would be able to open several books at the same time, perhaps on the same subject, creating a path and various links between the pieces of information. This would in effect define a new book, and would also support note-taking thanks to dry photography technology.
Bush, with this essay, strongly anticipates what we would see a few decades later in the World Wide Web. He also revolutionises the concept of the workspace, introducing page-by-page navigation via links and connections.
This essay fundamentally inspired the subsequent innovation of the "oN-Line System" (NLS), the pioneering system created by Douglas Engelbart and presented in the 1968 "Mother of All Demos," which we discussed in a previous article on Red Hot Cyber.
This article was written by Massimiliano Brolli. Original publication date: 15/11/2021. Translator: Tara Lie, a cyber security analyst from Perth, Western Australia, focused on governance, risk quantification, and compliance; a graduate of cyber security and pure mathematics, with a second major in Italian Studies. Tara has earned a Master's degree in Cyber Security and has a great passion for quantum-preparedness.
(3) Cahit Arf
Cahit Arf
Can machines think and how can they think? [1]
Cahit Arf 1958/59, Turkish mathematician
Can machines think, and how? If someone had raised that idea approximately 60 years ago, probably only a few people would have believed it; and so it was. But today we all know that machines can think. Although it might be a frightening thought for some people, for others it is salvation; the "learning" of machines continues to grow without slowing down, day by day, even as I write this article.
One of the first papers to propose that machines can think was written by Alan M. Turing, the man who broke the "Enigma" cipher [2]. For the first time, he proposed that two supposedly unconnected words, "machine" and "think," which might seem far away from each other, could have meaning together, and he described "the imitation game." Basically, a machine takes the place of a person in sending messages, and someone must somehow decide whether they are conversing with a machine or a person. Even today these concepts are mind-blowing to imagine, yet they were proposed and demonstrated 60–70 years ago.
Enigma M4, used by the German Kriegsmarine during WWII
Introduction to Machine Learning
Now we know that machines can think. The second question was "how." So how can machines think? In this regard, the concept of "machine learning" is introduced. Machine learning is a subfield of artificial intelligence (AI): the idea that a machine can make decisions on its own by means of a series of steps. In order to apply the machine learning concept, there are three main points:
· Data
· Pattern
· No mathematical formulation
Data:
This is the main part of machine learning. How do we learn in real life? We experience, right? Actually, we collect data. As a simple example, we may have learned not to wear a winter jacket in the summer by experiencing it, or somebody may have given us that knowledge. The same concept is valid for machines. If you don't have data, then machine learning cannot be applied; from simple to complicated examples, no machine learning algorithm works without data. This is one of the reasons why data science has gained so much importance, particularly in the last decade: it is the main and fundamental source of machine learning.
No mathematical formulation:
Let's start with an idea. Say I ask a question and want a solution not just for one case but in general. Basically, what should be done is to find a function or formulation that describes all the cases of the question. It seems possible to find such a solution, right? In machine learning, one cannot find such a function exactly. This is the concept of the "target function" in machine learning. The target function sits on top of everything and is unknown; the aim is to find the function that comes closest to the unknown target function. If there is a mathematical formulation of the problem at the very beginning, it does not make sense to apply machine learning algorithms. It might work, meaning a solution can be found, but it is not going to be worth it. As a result, the main aim of machine learning is to find the best approximation (hypothesis), since the target function will always remain unknown.
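As a minimal sketch of this idea, consider a toy one-dimensional problem: the learner never sees the target function itself, only (x, y) samples drawn from it, and it picks the hypothesis (here, a straight line fitted by least squares) that best matches the data. The target function, the sample points, and the linear hypothesis class are all invented for illustration; with noisy data or a richer target, the hypothesis would only approximate the target, never recover it exactly.

```python
# The target is hidden from the learner; it is used only to generate data.
def target(x):
    return 3 * x + 1

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [target(x) for x in xs]       # the data set the learner actually sees

# Least-squares fit of a line h(x) = a*x + b (closed-form solution).
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))    # 3.0 1.0 -- the hypothesis matches the line
```

Because this toy data is noise-free and the target really is a line, the hypothesis coincides with the target; in realistic problems only the approximation is attainable, which is exactly the point made above.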
Pattern:
The last point is that there should be a pattern in the data. What does it mean? If the data is completely random, machine learning can not learn “efficiently”. Just imagine, you are trying to teach a child how to speak German by talking in all languages randomly. The child can speak, this is the learning part, but she/he can not speak German in a correct way. Thus, in the data, there should be a pattern so that the machine can learn efficiently.
Well, we have covered the three main ingredients of machine learning. Which one is the most important? Any guess? You guessed it: the answer is data. You can still learn without a pattern, or when a mathematical formulation of the problem exists; it might not be the best solution, but you can learn. Without data, however, machine learning cannot be performed at all.
Main Types of Machine Learning
supervised vs unsupervised learning
Let’s explain machine learning types with an easy example in order to understand the concept.
Assume that you run a business that makes a profit by selling cars in a small town close to the sea. Every morning before work, you take a walk along the seashore and drink some coffee. Then you watch… Stop, stop! We are losing our path. Let’s just say you sell cars. You have sold many cars and keep records of your customers: their age, salary, job, where they live, how often they visit, whether they bought a car, and so on. In this way you have a database of customers, each labeled according to whether she/he bought a car or not. And a new customer has just arrived! The question is: will the new customer buy a car or not? Since we have the data of the previous customers, we can estimate this with a probability by applying a machine learning model. If the features of the new customer mostly match the profile of the customers who bought cars, then she/he will most probably buy one; otherwise, she/he will not. This is called “supervised learning”. You train your model on previous experiences and their labels, so the model learns the profile of the people who are likely to buy a new car.
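The car-dealer scenario maps naturally onto a k-nearest-neighbors classifier. The sketch below uses invented customer records of the form (age, salary in thousands) with a bought/didn’t-buy label; a new customer is labeled by a majority vote among the most similar past customers, just as the paragraph describes.

```python
# Hypothetical past customers: (age, salary_in_k) -> bought (1) or not (0).
customers = [
    ((25, 30), 0), ((30, 45), 0), ((35, 50), 0), ((28, 38), 0),
    ((45, 80), 1), ((50, 95), 1), ((42, 70), 1), ((55, 110), 1),
]

def predict(new, k=3):
    """Label a new customer by majority vote of the k most similar past ones."""
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    nearest = sorted(customers, key=lambda c: dist(c[0], new))[:k]
    votes = sum(label for _, label in nearest)
    return int(votes > k / 2)

print(predict((48, 90)))  # resembles past buyers -> 1
print(predict((26, 35)))  # resembles past non-buyers -> 0
```

The supervision lives in the labels: the model never invents the buy/no-buy distinction, it only generalizes the labels it was given to new customers.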
Let’s talk about another scenario, with the same business. This time you have just started, so your database does not record whether customers bought cars or not. You only have their features, such as age, salary, job, and where they live. What can you do at this point? You can cluster the customers according to their features to find hidden patterns or groupings in the data. This is called “unsupervised learning”. There are no labels to supervise the learning, but you can still find the pattern by clustering the data or summarizing its distribution.
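Clustering the unlabeled customers can be sketched with a minimal k-means loop (again with invented (age, salary) records, and a deterministic starting choice of centroids to keep the sketch reproducible). Without ever seeing a bought/didn’t-buy label, the algorithm still separates the customers into two coherent groups.

```python
# Hypothetical, unlabeled customer records: (age, salary in thousands).
customers = [(25, 30), (27, 35), (30, 40), (48, 85), (52, 95), (50, 90)]

def kmeans(points, k=2, steps=10):
    """Minimal k-means: assign each point to the nearest centroid, then
    move each centroid to the mean of its assigned points, and repeat."""
    centroids = [points[0], points[-1]]  # deterministic start for the sketch
    clusters = []
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[i].append(p)
        centroids = [(sum(p[0] for p in c) / len(c),
                      sum(p[1] for p in c) / len(c)) for c in clusters]
    return clusters

groups = kmeans(customers)
print(groups)  # a "younger, lower salary" group and an "older, higher salary" group
```

What the two groups *mean* (say, likely buyers versus unlikely buyers) is left to you to interpret; the algorithm only finds the structure.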
Well, now we know the difference between “supervised” and “unsupervised” learning. The final main type is “reinforcement learning”. Let’s try to explain it with the same example. As in the unsupervised case, you do not know whether the customers bought cars or not, but their features are known. This time you send a salesperson, called an “agent” in machine learning terminology, to find people who might buy a new car. The agent picks a customer who might buy a car and brings them to you. You talk to the customer and tell the agent: “yes, it is the right customer” or “no, it is the wrong customer”. In effect, you reward or punish the agent so that it learns to choose the right customers. This is the concept of reinforcement learning: the agent learns from rewards and punishments.
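The reward-or-punish loop can be sketched as a simple bandit-style agent. In this made-up setup there are two customer profiles with different (and, to the agent, unknown) chances of buying; the agent mostly approaches the profile it currently believes is best, occasionally explores, and updates its estimate after every reward or punishment.

```python
import random

random.seed(2)

# Two hypothetical customer profiles the "salesperson" (agent) can approach.
# Profile 1 buys a car 80% of the time, profile 0 only 20%. The agent does
# not know these numbers; it must learn them from rewards.
buy_probability = {0: 0.2, 1: 0.8}

values = {0: 0.0, 1: 0.0}   # the agent's current estimate of each profile
counts = {0: 0, 1: 0}

for step in range(1000):
    # Mostly exploit the best-looking profile, sometimes explore at random.
    if random.random() < 0.1:
        choice = random.randint(0, 1)
    else:
        choice = max(values, key=values.get)
    # Reward (+1) if the customer buys, punishment (0) otherwise.
    reward = 1 if random.random() < buy_probability[choice] else 0
    counts[choice] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[choice] += (reward - values[choice]) / counts[choice]

print(values)  # the estimate for profile 1 ends up clearly higher
```

Nobody ever hands the agent a labeled database; the knowledge of which profile is worth approaching emerges purely from the stream of rewards and punishments, which is what distinguishes this setting from supervised learning.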
There are other machine learning types as well, such as semi-supervised learning, inductive learning, and ensemble learning, but only these three main types are investigated in this article.
Conclusion
In conclusion, over the past years humankind has gained powerful momentum in artificial intelligence and machine learning. Some people think these trendy topics will bring the end, while others see them as salvation. In my opinion, there is a thin line between end and salvation. We therefore have to be ready for the new era: it is not only the machines that must “learn”, but we must too.
Alan Turing, who first came up with the ideas of artificial intelligence and thinking machines, and Cahit Arf, who discussed these machines for the first time in Turkey’s history, both analyzed what it means for a machine to think.
The question of whether machines can think, although implicitly expressed throughout history, was first explicitly addressed by Alan M. Turing in 1950. Raising the question at a public conference in 1958, Arf offered examples of machine designs that could convince us they think, and made observations about the possibility of machines having certain features that can be considered indicators of human-like thought. According to him, machines can be designed with the mental abilities to perform logical and analytical operations, with thinking styles based on using language, calculating, making analogies, and eliminating; and there are similarities between the way the human brain works and the way machines work. However, Arf sees the main difference between humans and machines in the difficulty of giving machines the aesthetic consciousness of humans.
Alan M. Turing proposes in his paper Computing Machinery and Intelligence to consider the question “Can machines think?”. In his approach, he replaces the question with another which is “expressed in relatively unambiguous words” to avoid the vagueness of the ordinary concepts of “machine” and “think”. He attempts to describe the new form of the problem in terms of a game (“imitation game”). Nowadays, the game is known as the Turing test.
Mathematician, Computer Scientist - Alan Mathison Turing
According to Turing, the original question “Can machines think?” can be replaced by one similar to the following: can the interrogator distinguish between the machine and the person? In explicit terms, the criterion for saying that a machine can think is the interrogator’s inability to tell the difference.
Arf presented different reactions to different stimuli as the concrete indicator of thought: people react with different words to different words spoken to them, or to different effects they are exposed to, and these reactions should be taken as proof of their thinking. Thus, by referring to this effect-and-reaction relationship, Arf establishes a link between thought and behavior and treats this link as depending on the phenomenon of language.
The first thing to note is that Arf does not touch upon the subject of “learning” for thinking machines. For Turing, on the other hand, learning is the main point: he holds that for thinking machines to be possible, one must turn to this business of learning. One similarity is that both Turing and Arf draw attention to, and emphasize the importance of, machine memories.
Another similar inquiry is that both point out that machines are designed to solve a specific problem. Arf explains this by drawing an analogy between the human brain and the machine, while Turing analyzes the issue in the context of Lovelace’s thesis, which he rejects with his seed/plant analogy.
Their criteria also differ. For Turing, a thinking machine is judged by correspondence between a machine and a human who cannot see each other: if the human cannot tell that the other party is a machine, then that machine is a thinking machine. Arf does not put forward such a condition; instead, appealing to the uncertainty principle, he suggests that thinking machines of this kind would be possible if subatomic particles directed them.
Finally, Turing has full faith in thinking machines and full hope that they will emerge by the end of his century. Arf, on the other hand, stands at a more pessimistic point and thinks that it may not be achieved for many years, if ever.
References
Turing, Alan M. (1950). Computing Machinery and Intelligence, Mind, 59, 433–460.
Arf, C. (1959). Makine Düşünebilir Mi ve Nasıl Düşünebilir?, Atatürk Üniversitesi, Üniversite Çalışmalarını Muhite Yayma ve Halk Eğitimi Yayınları Konferanslar Serisi No: 1, Erzurum, pp. 91–103.
Sarı, F. (2021). Cahit Arf’in “Makine Düşünebilir mi ve Nasıl Düşünebilir?” Adlı Makalesi Üzerine Bir Çalışma, TRT Akademi, 6(13), 812–833.