Groucho Marxism

Questions and answers on socialism, Marxism, and related topics

  • Since April 2023 there has been an active civil war in Sudan between two rival factions of the country’s military government: the internationally recognized government controlled by the Sudanese Armed Forces (SAF), led by General Abdel Fattah al-Burhan; and the paramilitary Rapid Support Forces (RSF), led by General Mohamed Hamdan Dagalo, better known as Hemedti, who also heads the broader Janjaweed coalition. Fighting has been largely concentrated in the capital, Khartoum, where the conflict began, and in the Darfur region in the west of the country. Such conflicts are not exactly rare in Sudan. Since gaining independence in 1956, Sudan has endured chronic instability marked by 20 coup attempts, prolonged military rule, two devastating civil wars, and a genocide in Darfur.

    The conflicts that have occurred in Sudan since 1956 have largely been between the relatively wealthy, Muslim north and the less developed, predominantly Christian or animist south. These culminated with the southern part of the country breaking away and becoming a separate country, South Sudan, in 2011. But anyone who thought that the secession of South Sudan would put an end to the conflicts would have been sorely mistaken. Along with tensions between the north and south of the country there are also longstanding tensions between the western Darfur region and the east of the country. The fault line in this case is not religion, but ethnicity. The east is populated mainly by Arabs or Arabized Africans, whereas Darfur is populated by indigenous Fur, Zaghawa, and Masalit ethnic groups.

    These tensions came to a head with the war in Darfur, which lasted for 17 years from 2003 to 2020. One side of the conflict was mainly composed of the Sudanese armed forces, police, and the Janjaweed coalition, whose members are mostly recruited among Arabized Africans. The other side was made up of rebel groups, notably the Sudan Liberation Movement and the Justice and Equality Movement, recruited primarily from the non-Arab indigenous ethnic groups in the region. The origins of this conflict go back to the 11th century, when Arab migrations into the Nile valley resulted in the people there becoming heavily Arabized, whilst the people of Darfur remained more faithful to native Sudanese cultures. The war in Darfur led to a genocide of the indigenous population by Janjaweed militias.

    Against this backdrop it may be tempting to write Sudan off as a failed state that will always be mired in conflict as a result of its internal ethno-religious tensions. However, there is a lot more to it than that. The current war is tied to global financial interests, with sponsors of opposing parties profiting from the chaos. Sudan has effectively become the stage for one of the world’s most devastating proxy wars. At the heart of the crisis lies the struggle for profit, power, and influence. The UAE has been deliberately working to destabilize Sudan for the sake of resource extraction, by sponsoring the RSF rebels. The UAE’s dominance in the illicit gold trade is a key feature of its influence in Africa, and the RSF’s control of gold mining operations in Sudan makes it a valuable proxy.

    Israel also has a role in the conflict, which revolves around advancing normalization agreements and limiting Hamas’s influence. Whilst the UAE has focused on the RSF, Israel has cultivated ties with the SAF. Under the SAF, Sudan has become an ally of Israel, agreeing to join the Abraham Accords and freezing Hamas assets within the country. It may seem strange that the UAE and Israel are supporting different sides in the conflict, particularly as the UAE is also a signatory of the Abraham Accords. But both countries seek a weakened Sudanese state with limited sovereignty, and as such it makes sense for them to back opposing sides. As far as Israel and the UAE are concerned, the longer the conflict goes on, the better.

    As ever, it is ordinary people who are suffering most from this game of international power politics. The humanitarian impact of the war is difficult to overstate. A senior official from the United Nations World Food Program warned in April this year that Sudan is facing the world’s worst humanitarian crisis, with nearly 25 million people experiencing extreme hunger, over 12 million displaced, and at least 20,000 confirmed dead. Ultimately, the cause of all this needless suffering is the global capitalist system under which we are all forced to live, which prioritizes profit, power, and influence over human life. Proxy wars such as the one currently going on in Sudan will continue to occur until we get rid of capitalism and replace it with a system that prioritizes human well-being.

  • A language family is a group of languages related through descent from a common ancestor, referred to as the proto-language. The Indo-European languages form a particularly large language family comprising languages native to most of Europe, the Iranian plateau, and the northern Indian subcontinent. The proto-language of the Indo-European languages is referred to as Proto-Indo-European, or PIE. This language is not directly attested but has been reconstructed by linguists based on the attested daughter languages. Languages of the Indo-European family are classified as either ‘centum’ or ‘satem’ according to how the velar consonants of PIE – sounds produced using the back part of the tongue and the roof of the mouth, or velum – subsequently developed.

    PIE is usually reconstructed with three types of velar consonant: palatovelars, pronounced using the front of the velum; plain velars, pronounced using the back of the velum; and labiovelars, pronounced with concomitant lip-rounding. This traditional model implies that in the centum languages – Anatolian, Celtic, Germanic, Greek, Italic, and Tocharian – the palatovelars merged with the plain velars, whereas in the satem languages – Albanian, Armenian, Balto-Slavic and Indo-Iranian – the plain velars merged with the labiovelars. However, the existence of all three types of velar has long been a source of controversy, and many scholars, going all the way back to Antoine Meillet in 1894, have suspected that there were in fact only two types of velar consonant in PIE.

    There are several arguments in favour of the two-velar hypothesis. First, the plain velars are rarer than the other two types, are almost entirely absent from affixes, and appear most often in certain phonological environments. This suggests they were in complementary distribution with the palatovelars or labiovelars (or both). Second, it is extremely rare cross-linguistically for palatovelars to move backwards in the mouth, with the opposite process, known as palatalization, being much more common. Yet the traditional reconstruction implies that this ‘depalatalization’ occurred on six separate occasions: in Anatolian, Tocharian, Greek, Germanic, Italic, and Celtic.

    Third, in the satem languages, where the palatovelars supposedly remained intact, other palatalizations also occurred, implying that palatalization was a general trend. This suggests it is not necessary to reconstruct a separate palatovelar series for PIE. And fourth, alternations between palatovelars and plain velars are common across different satem languages, with the same root appearing with a palatovelar in some languages and a plain velar in others, or even a palatovelar and a plain velar within the same language. These alternations, referred to as ‘Gutturalwechsel’, are consistent with the analogical generalization of one velar type in an originally alternating paradigm but difficult to explain otherwise.

    A potential solution to some of these problems is to retain the three-velar system but change the phonetic interpretation of the different types of velar. This approach involves reinterpreting the palatovelars as plain velars, and the plain velars as uvular consonants, articulated further back with the tongue against or near the uvula. This gets around the problems of the plain velars being rarer than the other two types and their restricted distribution, as this would be expected if they were really uvular consonants. It also gets around the problem of the unlikely parallel depalatalization in the different centum languages. However, it still doesn’t explain the common alternations between palatovelars (or plain velars) and plain velars (or uvulars).

    One position where these alternations are particularly common is after so-called ‘mobile *s’. This refers to the phenomenon whereby a PIE root appears to begin with an *s which is sometimes but not always present (the * just means we are dealing with a reconstructed form). There are many examples of roots with a mobile *s followed by a velar where the root yielded reflexes in the satem languages beginning either with *s plus a plain velar, corresponding to the form with the *s, or with a palatovelar, corresponding to the form without the *s. This is consistent with the hypothesis that there were only two velar types in PIE, a plain velar and a labiovelar, with the ‘palatovelars’ being the result of palatalization of plain velars in the satem languages which was blocked after *s.

    In his 1973 PhD thesis, the linguist Lars Steensland demonstrated a complementary distribution between the three types of velar. He first showed that in word-initial position, only plain velars occur after *s, except before *i where only palatovelars occur, which backs up the argument set out in the paragraph above. Steensland then showed that in word-initial position not following *s, palatovelars occur everywhere apart from before *r and *s; plain velars occur everywhere apart from before *e, *i, and what we would now call *H₁; and labiovelars only occur before *e, *i, *r, and what we would now call *H₁. Thus, in word-initial position, there isn’t a single environment where all three types of velar occur.
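
    To see the complementary distribution at a glance, here is the same information restated as a small Python sketch (the environment labels and type names are my own shorthand, not Steensland’s notation):

        # Word-initial distribution of the three velar types, as described above.
        # 'pal' = palatovelar, 'plain' = plain velar, 'lab' = labiovelar.
        not_after_s = {
            "before *e":  {"pal", "lab"},
            "before *i":  {"pal", "lab"},
            "before *H1": {"pal", "lab"},
            "before *r":  {"plain", "lab"},
            "before *s":  {"plain"},
            "elsewhere":  {"pal", "plain"},
        }
        after_s = {
            "before *i":  {"pal"},
            "elsewhere":  {"plain"},
        }

        # No single environment admits all three velar types:
        environments = list(not_after_s.values()) + list(after_s.values())
        assert all(len(types) < 3 for types in environments)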

    One explanation for this distribution runs as follows. PIE originally had two types of velar: a plain velar and a labiovelar. In the satem languages, plain velars became palatalized in word-initial position in all environments apart from (1) after *s, unless followed by *i; and (2) before *r and *s. In the centum languages, labiovelars were delabialized in word-initial position in all environments apart from before *e, *i, *r, and *H₁. The problem with this explanation is that it implies that the latter development occurred separately in the centum languages on six separate occasions. This seems rather implausible and removes one of the key arguments in favour of the two-velar reconstruction. (The satem languages were contiguous so could plausibly have undergone such changes together.)

    Nonetheless, I believe Steensland was on the right lines with his approach – mainly because it involved looking at the actual data. We probably just need to modify his theory slightly to allow for different developments in the centum languages. I will take this up in a future blog post.

  • Scientific socialism is a term popularized by Karl Marx and Friedrich Engels to describe their socio-political approach, which aims to apply the scientific method to the analysis of society. It contrasts with utopian socialism by basing itself upon material conditions instead of conceptions and ideas. Thus, scientific socialism takes a materialist approach to the analysis of society whereas utopian socialism takes an idealist approach. The distinction between the two provides a useful way to answer the question of whether socialism and Marxism are synonymous; the answer is clearly not, as utopian socialists are socialists but not Marxists. The modern meaning of the term scientific socialism is based almost entirely on Engels’s 1880 book Socialism: Utopian and Scientific.

    Whether Marx and Engels were truly scientists in the widely understood sense of the term is open to debate. Personally, though, I believe they were. Their dialectical materialist approach foreshadowed many ideas in the modern scientific field that we call complex systems, as I outlined in a previous blog post. For example, the dialectical materialist concept of conversion of quantity into quality is almost identical to the complex systems concept of a critical transition or bifurcation point. And the dialectical materialist idea that all things are interconnected and interdependent, and that a system can only be understood in relation to its environment and other systems, is also considered central to complex systems theory.

    This is all the more remarkable when you consider this field only began to be developed in the 1940s, 80 years after Marx wrote volume 1 of Das Kapital. The view of society as a complex system has important consequences for assessing the likelihood of critical transitions such as revolutions. One of the key insights from complex systems theory is that it is almost impossible to accurately predict the behaviour of large, interconnected systems with many feedback loops. Anyone who says that a revolution is impossible either doesn’t understand complex systems or is talking out of their backside. This isn’t to say that a revolution will definitely occur any time soon; just that nobody knows for sure if and when one will occur.

    Another reason I think it is legitimate to call Marx a scientist is that he made testable predictions. For example, he predicted that the rate of profit would fall over time, which has since been validated empirically, as I explained in a previous blog post. He also predicted that capitalism would be prone to crises, which has been emphatically borne out by subsequent events. His most famous prediction, that capitalism would eventually collapse and give way to socialism, has not come to pass (yet). But part of the reason for that is that this prediction became self-refuting. The idea that capitalism might collapse has led the ruling class to do everything it can to prop up the system and stop that happening, from making concessions to workers in the post-war period to bailing out bankers after the 2008 crash.

    I don’t think Marx gets the credit he deserves given how accurate his predictions have turned out to be. The reason is that, largely through the magic of propaganda, Marx has come to be seen by many as the architect of several crimes against humanity, such as the Russian famine of the 1930s and the Chinese famine of the 1950s. But these events occurred decades after Marx’s death and blaming Marx for them makes about as much sense as blaming Jesus for the crusades. Just as there is nothing in the teachings of Jesus that would lead anyone to commit crimes against humanity, neither is there anything in the writings of Marx that would lead anyone to commit such crimes. Even so, the name Marx now carries negative connotations for many people.

    Another reason people tend to discount or ignore Marx’s insights is that they assume they are no longer relevant. But his insights are just as relevant today as they were when he was making them 150 years ago. They will always be relevant as long as we live under capitalism, and will only stop being relevant once we get rid of capitalism for good and replace it with socialism. Nonetheless, given how loaded the terms ‘Marx’ and ‘Marxism’ have become, perhaps there would be some value in us Marxists re-branding ourselves as ‘scientific socialists’. As we have seen, all Marx and his long-time collaborator Engels were really trying to do was apply a scientific approach to the analysis of society – and who would argue with that?

  • Calculus is the mathematical study of change. It has two major branches: differential calculus and integral calculus. The former concerns rates of change and the slopes of curves, whereas the latter concerns accumulation of quantities and areas under or between curves. Calculus was formulated independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz, and was instrumental in enabling Newton to formulate his laws of motion. Despite its obvious success in describing the motion of objects, however, differential calculus in particular came in for some criticism. Perhaps the most prominent critic was the 18th century Irish philosopher and bishop George Berkeley, who critiqued differential calculus in his 1734 pamphlet The Analyst.

    Berkeley’s argument was that the method relied on infinitesimals, which were treated as quantities that are simultaneously zero and non-zero. To see this, recall that to take the derivative of a function f() defined on the real numbers, we first calculate the quantity [f(x+h)-f(x)]/h, then evaluate the result when h = 0. The quantity h is considered an infinitesimal of the type that Berkeley was referring to. In the first step this cannot be zero, as we divide by h, and division by zero is not allowed; but then in the second step we set h equal to zero! You can see where Berkeley was coming from with his critique. However, it is now generally accepted that Berkeley’s criticism was answered with the rigorous development of limits in the 19th century.
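
    To see the two steps in action, here is a quick numerical sketch in Python of the difference quotient for f(x) = x^2 at x = 3, where the true derivative is 6; note that h only ever gets small – it is never actually zero:

        # Difference quotient [f(x+h)-f(x)]/h for f(x) = x**2 at x = 3.
        # The true derivative there is 2x = 6.
        f = lambda x: x**2
        x = 3.0
        for h in [1.0, 0.1, 0.01, 0.001]:
            print(h, (f(x + h) - f(x)) / h)   # 7.0, 6.1, 6.01, 6.001 (approx.)
        # The quotient approaches 6 as h shrinks, yet h is never set to zero.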

    The solution mathematicians came up with was to define the derivative as the limit of [f(x+h)-f(x)]/h as h ‘tends to zero’. This was given a precise definition using something called the ‘epsilon-delta’ definition of a limit, which I won’t go into here. Suffice it to say that the epsilon-delta definition relies on the existence of infinite sets. In a previous blog post I have criticized the assumption that infinite sets exist – the so-called ‘axiom of infinity’ – on materialist grounds. A critic of this position might argue that removing infinite sets from mathematics would remove our ability to rigorously define the derivative of a function using limits; and they would be right. But there is an alternative formulation of calculus which obviates the need for such a definition altogether.

    Discrete calculus is an analogue of calculus for functions defined on discrete domains. In the remainder of this blog post I will go through some of the basics. Consider a function f() defined on the finite domain X = {0,1,…,N}. The discrete derivative of f() is defined by Df(x) = f(x+1)-f(x). The discrete derivative is linear: D(af+bg) = aDf + bDg for all integer constants a,b and functions f,g defined on X. We can derive a discrete analogue of the product rule: D(fg)(x) = f(x+1)Dg(x)+Df(x)g(x). We can also derive a discrete analogue of the quotient rule: D(f/g)(x) = [Df(x)g(x)-f(x)Dg(x)]/[g(x)g(x+1)]. Recall that in standard (continuous) calculus, d(x^n)/dx = nx^(n-1). In discrete calculus we have the analogous rule given by D(x^[n]) = nx^[n-1], where x^[n] is the ‘falling power’: x^[n] = x(x-1)…(x-n+1).
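
    These definitions are easy to play with in code. Here is a minimal Python sketch (the helper names are my own) of the discrete derivative and the falling power, spot-checking the rule D(x^[n]) = nx^[n-1] and the product rule:

        # Discrete derivative: Df(x) = f(x+1) - f(x).
        def D(f):
            return lambda x: f(x + 1) - f(x)

        # Falling power x^[n] = x(x-1)...(x-n+1).
        def falling(x, n):
            result = 1
            for k in range(n):
                result *= (x - k)
            return result

        # Spot-check D(x^[n]) = n * x^[n-1] for n = 3:
        f = lambda x: falling(x, 3)
        for x in range(5):
            assert D(f)(x) == 3 * falling(x, 2)

        # Spot-check the product rule D(fg)(x) = f(x+1)Dg(x) + Df(x)g(x):
        g = lambda x: x * x
        fg = lambda x: f(x) * g(x)
        for x in range(5):
            assert D(fg)(x) == f(x + 1) * D(g)(x) + D(f)(x) * g(x)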

    Euler’s number, e, is the number with the property that d(e^x)/dx = e^x. The discrete analogue of e is 2, as D(2^x) = 2^(x+1)-2^x = 2^x. The discrete integral is simply a sum: ∑a→b f(x) = f(a)+f(a+1)+…+f(b-1); note that the sum does not include f(b). The fundamental theorem of discrete calculus immediately follows from the definition of the discrete integral: ∑a→b Df(x) = f(b)-f(a). It is straightforward to determine from the fundamental theorem that ∑a→b x^[n] = (b^[n+1]-a^[n+1])/(n+1). Note that it follows from the product rule above that Df(x)g(x) = D(fg)(x)-f(x+1)Dg(x). Integrating (summing) both sides between a and b, we obtain a discrete analogue of the integration by parts formula: ∑a→b Df(x)g(x) = f(b)g(b)-f(a)g(a)-∑a→b f(x+1)Dg(x). This formula allows us to do more advanced integrations.
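
    Continuing the Python sketch, the discrete integral and the fundamental theorem are just as easy to verify (again, the names are my own):

        # Discrete integral: sum of f(x) for x = a, ..., b-1 (f(b) excluded).
        def dsum(f, a, b):
            return sum(f(x) for x in range(a, b))

        # Fundamental theorem: the sum of Df from a to b equals f(b) - f(a).
        f = lambda x: x**3
        Df = lambda x: f(x + 1) - f(x)
        assert dsum(Df, 2, 7) == f(7) - f(2)

        # Discrete analogue of e: D(2^x) = 2^x, so summing 2^x telescopes.
        two = lambda x: 2**x
        assert dsum(two, 0, 10) == 2**10 - 2**0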

    We can also define discrete second derivatives. The obvious definition involves simply applying the first derivative twice: D^2 f(x) = D(Df)(x) = f(x+2)-2f(x+1)+f(x). There is an alternative definition though. Letting D^+ f(x) = f(x+1)-f(x) and D^- f(x) = f(x)-f(x-1), we can set D^2 f(x) = D^+(D^- f) = D^-(D^+ f) = f(x+1)-2f(x)+f(x-1). This definition has the advantage of being symmetric around x. In continuous calculus, sin() and cos() are functions f() with the property that d^2 f/dx^2 = -f. To find discrete analogues of these, we must find functions f() such that D^2 f = -f. That is, we must solve f(x+1)-f(x)+f(x-1) = 0, or f(x+1) = f(x)-f(x-1). Setting f(0) = 0 and f(1) = 1, we get the sequence (0,1,1,0,-1,-1,0,1,…); this is the discrete analogue of sin(). Setting f(0) = 1 and f(1) = 0, we get (1,0,-1,-1,0,1,1,0,…); this is the discrete analogue of cos().
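
    The two sequences can be generated directly from the recurrence f(x+1) = f(x)-f(x-1); a short Python sketch:

        # Generate f from f(x+1) = f(x) - f(x-1) and two initial values.
        def solve(f0, f1, n):
            seq = [f0, f1]
            while len(seq) < n:
                seq.append(seq[-1] - seq[-2])
            return seq

        dsin = solve(0, 1, 8)   # [0, 1, 1, 0, -1, -1, 0, 1]
        dcos = solve(1, 0, 8)   # [1, 0, -1, -1, 0, 1, 1, 0]

        # Check the defining property D^2 f = -f at the interior points:
        for x in range(1, 7):
            assert dsin[x + 1] - 2 * dsin[x] + dsin[x - 1] == -dsin[x]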

    Let dsin() and dcos() denote these discrete analogues of sin() and cos(). Then from the definitions, we have D^+(dsin(x)) = dcos(x) and D^+(dcos(x)) = -dsin(x+1). Similarly, we have D^-(dsin(x)) = dcos(x-1) and D^-(dcos(x)) = -dsin(x). These are analogous to the relations between sin() and cos() and their derivatives in standard calculus. Thus, we have successfully defined discrete analogues of derivatives (including the product and quotient rules), integrals (including integration by parts), second derivatives, Euler’s number, and the trigonometric functions sin() and cos(). There are yet more analogous definitions that can be made, but I will leave these for a future blog post.
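
    Both sequences repeat with period 6, so these four relations are easy to verify numerically – a final sketch:

        # dsin and dcos repeat with period 6, so index modulo 6.
        DSIN = [0, 1, 1, 0, -1, -1]
        DCOS = [1, 0, -1, -1, 0, 1]
        dsin = lambda x: DSIN[x % 6]
        dcos = lambda x: DCOS[x % 6]
        Dp = lambda f: lambda x: f(x + 1) - f(x)   # forward difference D^+
        Dm = lambda f: lambda x: f(x) - f(x - 1)   # backward difference D^-

        for x in range(12):
            assert Dp(dsin)(x) == dcos(x)
            assert Dp(dcos)(x) == -dsin(x + 1)
            assert Dm(dsin)(x) == dcos(x - 1)
            assert Dm(dcos)(x) == -dsin(x)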

  • In classical logic, statements or propositions are always either true or false. One of the justifications given for this rule is that contradictions – statements that are both true and false – entail everything. Which is to say that if you allow a single contradiction, you can prove any proposition you like. This feature is known as the principle of explosion and can be easily proved, as follows. Suppose the proposition P is both true and false; then P is true, so (P or Q) is true for any proposition Q; but P is also false, so Q must be true; and since Q was arbitrary, the result is proved. However, there are alternatives to classical logic that allow for the coexistence of contradictory statements without leading to a logical explosion where anything can be proven true. These logics are referred to as ‘paraconsistent’.

    It is clear from the above that in order to avoid the principle of explosion, we must abandon either the principle of ‘disjunction introduction’ – P implies (P or Q) – or the principle of ‘disjunctive syllogism’ – from (P or Q) and (not P), we may infer Q. In practice, we can choose to abandon either or both of these principles if we want to. Some might object to tampering with the laws of logic in this way. Logic is the foundation upon which the whole of mathematics rests, and messing around with this foundation may seem rather foolhardy. But it is important to point out that, much as we are free to choose the axioms of mathematics however we like, we are free to choose the rules of logic however we like as well. Just like mathematics, logic is, at the end of the day, a human invention.
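
    To make this concrete, here is a toy Python model of one well-known paraconsistent system, Graham Priest’s three-valued ‘Logic of Paradox’ (LP), which adds a third truth value B (‘both true and false’). In LP, disjunction introduction survives but disjunctive syllogism fails, which is exactly what blocks the explosion:

        # Logic of Paradox (LP): truth values F < B < T, where B = 'both'.
        F, B, T = 0, 1, 2
        designated = {B, T}              # values that count as 'true'
        NOT = {T: F, B: B, F: T}         # negation leaves B fixed
        OR = lambda p, q: max(p, q)      # disjunction takes the greater value

        # Disjunctive syllogism fails: take P = B (a contradiction), Q = F.
        P, Q = B, F
        assert OR(P, Q) in designated    # premise (P or Q) holds
        assert NOT[P] in designated      # premise (not P) holds
        assert Q not in designated       # ...yet the conclusion Q fails

        # Disjunction introduction survives: if P is designated, so is (P or Q).
        for p in (B, T):
            for q in (F, B, T):
                assert OR(p, q) in designated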

    The view that (some) statements can be both true and false is known as ‘dialetheism’. Dialetheism is not a system of formal logic; it is a thesis about truth that influences the construction of a formal logic. We have already seen that there exist systems of formal logic that allow for contradictions without leading to the principle of explosion. Whether we choose to use one of these systems rather than classical logic depends on our views on dialetheism. One argument in favour of dialetheism is that it resolves well-known paradoxes that involve contradictory statements. The most famous of these is the liar paradox, which is encapsulated by the statement ‘I am lying’. The paradox arises when trying to determine whether this statement is true or not.

    The classical way to solve this problem is to revise the axioms of the logic so that self-contradictory statements are not allowed. Dialetheists, on the other hand, respond to this problem by simply accepting the statement ‘I am lying’ as both true and false. To a dialetheist, therefore, there is no paradox at all. Another argument in favour of dialetheism is that it is more closely aligned with human reasoning and human language. Ambiguous situations may cause humans to affirm both a proposition and its negation. For example, if someone stands in the doorway to a room, it may seem reasonable both to affirm that the person is in the room and to affirm that they are not in the room. Another example arises in statements such as ‘that car is red’, which one person may evaluate as true and another as false.

    One prominent advocate for dialetheism is the British philosopher Graham Priest. According to Priest, there is a close connection between dialetheism and the dialectics of Hegel and Marx. In fact Priest goes so far as to argue that both Hegel and Marx were dialetheists. Priest gives two examples of Marx’s apparent dialetheism. The first concerns the notion of a commodity, which, according to the argument set out in Das Kapital, is both a use-value and an exchange-value. This entails a contradiction as use-values and exchange-values are incommensurate. The second example concerns wage labour. Under capitalism, wage labourers are free to sell their labour power as they choose; yet they are hardly free in any meaningful sense as the alternative is starvation and death.

    The close connection between Marxism and dialetheism is not surprising as one of the things Marx is most famous for is pointing out the contradictions of capitalism. In formal logic, a contradiction is simply a statement which is both true and false. Whether Marx meant the term in this technical sense, or in a more colloquial sense, is open to debate. Regardless, in a recent (2024) paper, Priest shows that paraconsistent logic, which allows for true contradictions, can provide a basis for formalizing the somewhat nebulous concept of the dialectic. Specifically, Priest provides a formal logical model of a dialectical progression, a dynamic concept found in the writings of both Hegel and Marx. I shall return to this model in a future blog post.

  • The ‘self’ is a complex concept that refers to an individual’s unique sense of being, encompassing their thoughts, identity, and consciousness. Although it is difficult to pin down a precise definition of the self, most of us believe we have one. Or that we are one. Actually, which is it? Already we are starting to see how the concept becomes problematic as soon as you begin to examine it. The self has a peculiar property that the more you look for it, the less tangible it seems. Where is this ‘self’ exactly? Most of us think of the self as a little person sitting inside our heads. But this can’t be right, because that little person must also have a self, and where do they sit? Inside the little person’s head? This is just sending us into an infinite regress.

    Another possibility is to define the self to be identical to the body. This doesn’t really work either though, as most people would say that removing a part of their body – say, their leg – would not make them a fundamentally different person. Perhaps instead we can take the self to be identical to the brain. This seems a better definition, as removing a part of someone’s brain can actually turn them into a different person, at least in the eyes of others. We see this happen when victims of severe head injuries undergo fundamental personality changes. ‘They’re just not in there anymore’ is a common refrain from distraught family members, suggesting that they believe the injury victim’s old self has been either modified or replaced.

    There is a problem with the ‘self = brain’ definition too though. The brain is not static; it is changing constantly through a process called neuroplasticity, whereby it reorganizes its structure, functions, and connections in response to external stimuli. In contrast, the self is usually understood to be a static thing which stays the same throughout a person’s life (barring any serious head injuries). The vast majority of neurons in the brain are, however, never replaced, meaning the neurons you are born with are the ones you have for your entire life. Perhaps, then, we should identify the self not with the brain as a whole, but with the neurons within it? This doesn’t really work either, as brain function is a result of the interactions between neurons rather than of the neurons themselves.

    Whatever part or parts of the brain we try to associate with the self, we will always run into the same difficulty. It is the interactions between different parts of the brain that result in thoughts, identity, and consciousness, rather than the brain’s individual components. Maybe then we should consider the self as an emergent property of the complex system that is the brain. However, if we are to take this as our understanding of the self then we must abandon the intuitive idea of the self as a fixed entity. Thus we have arrived at an impasse. If we want the self to be a fixed entity, then we must abandon our intuitive idea of what the self is; conversely, if we want to keep this idea, we must abandon the notion of the self as a fixed entity.

    The only way out of this impasse is to accept that the self, as usually understood, is an illusion. This illusion emerges from a collection of different, often conflicting, thoughts, memories, and bodily processes, rather than from a fixed part of the brain. The subjective experience of a solid, independent self is a product of the brain’s storytelling and perception-making, not an objective reality. The brain creates a narrative to make sense of the world, and the self is the main character in this story. There is no single, anatomically located self in the brain, as most of us like to imagine; instead, the feeling of self arises from a complex network of processes.

    In a previous blog post I argued that free will, as usually understood, is also an illusion. The obvious parallel between the argument put forward there and the argument being put forward here is no coincidence. The illusion of self and the illusion of free will are two sides of the same coin. In fact the latter can be seen as a consequence of the former, as the illusion of free will stems from the idea of a fixed self which is in control of our actions. The illusion of self also leads to the notion of ego, whereby we see the world only from our own perspective. In another blog post I pointed out just how harmful this notion can be to our well-being and the well-being of those around us. Understanding that the self is an illusion is the first step in taming the ego, which in turn is the key to a happy life.

  • In linguistics, the term ‘phoneme’ refers to any of the perceptually distinct units of sound in a specified language that distinguish one word from another; for example p, b, d, and t in the English words pad, pat, bad, and bat. Languages vary considerably in the number of phonemes they have, from as few as 9 in the Brazilian indigenous language Pirahã to as many as 141 in the southern African language ǃXũ. It is usually claimed that there are 44 phonemes in English: 24 consonant phonemes and 20 vowel phonemes. However, the Hungarian linguist Péter Szigetvári has recently argued – convincingly, in my view – that English has just 6 vowel phonemes, and that the remaining 18 vowel sounds can be considered combinations of these six vowels plus a glide (y, w, or h).

    This suggests that the number of vowel phonemes may have been overestimated in other languages too. A recent (2013) study by the Chinese linguist San Duanmu seems to bear this out. Duanmu argues – again, convincingly in my view – that vowel inventories in all the world’s languages can be represented using just four basic features, which we may take as [low], [front], [round], and [raised] (here I am following the standard convention of putting features within square brackets []). What makes Duanmu’s argument convincing is that he has tested his hypothesis against data on languages from around the world, using two separate data sources (the databases UPSID and P-Base). Duanmu’s analysis puts an upper bound of 16 on the number of vowel phonemes a language could possibly have.

    This means that any vowel in any of the world’s languages can be represented by a 4-vector (a,b,c,d), where a, b, c, and d represent the features [low], [front], [round], and [raised], and can be either 0 or 1. A 1 signifies that the feature is present in that vowel phoneme, and a 0 signifies it is absent. We can then define four basic vowels as A = (1,0,0,0), I = (0,1,0,0), U = (0,0,1,0), and G = (0,0,0,1). Any vowel phoneme in any of the world’s languages can then be represented using combinations of these four basic vowels. For example, we can define the compound vowels E = A+I = (1,1,0,0), O = A+U = (1,0,1,0), and Y = I+U = (0,1,1,0). Under this conception, the space of all vowels is represented by a mathematical structure called a tesseract, or 4-dimensional hypercube.
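
    Here is the same construction as a short Python sketch (feature order: [low], [front], [round], [raised]):

        # Basic vowels as 4-vectors of features ([low], [front], [round], [raised]).
        A = (1, 0, 0, 0)
        I = (0, 1, 0, 0)
        U = (0, 0, 1, 0)
        G = (0, 0, 0, 1)

        # Compound vowels combine basic ones componentwise.
        def add(v, w):
            return tuple(a | b for a, b in zip(v, w))

        E = add(A, I)   # (1, 1, 0, 0)
        O = add(A, U)   # (1, 0, 1, 0)
        Y = add(I, U)   # (0, 1, 1, 0)

        # The full vowel space is the set of all 4-bit vectors: the 16 vertices
        # of a tesseract, matching Duanmu's upper bound of 16 vowel phonemes.
        from itertools import product
        vowel_space = list(product((0, 1), repeat=4))
        assert len(vowel_space) == 16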

    In the linguistics literature, the vowel space is usually represented as a quadrilateral, based on the shape of the tongue when pronouncing different vowel sounds. However the British linguist Geoff Lindsey and others have argued that the vowel space is better represented as a triangle, based on the resonant frequencies of different vowel sounds. Thus, the quadrilateral representation is based on the articulation of different vowel sounds, whereas the triangular representation is based on their acoustic characteristics. The representation of the vowel space as a tesseract in theory provides an alternative view. Unfortunately, it is impossible for us 3-dimensional creatures to visualize a 4-dimensional shape such as a tesseract.

    What we can do is transform a 4-dimensional shape into 3 dimensions, and again into 2 dimensions if we like, both of which we can visualize. One way of doing this is by taking what is known as the ‘vertex figure’ of the 4-dimensional shape. Roughly speaking, this is the figure exposed when a corner of a general polytope – that is, a figure with flat faces – is sliced off. The vertex figure of a tesseract is a tetrahedron, the 3-dimensional analogue of a triangle. The orthographic projection of a tetrahedron into 2-dimensional space results in a quadrilateral in general, or a triangle when viewed from a face or vertex. Thus, the representation of the vowel space as a tesseract provides a way to reconcile the two different 2-dimensional representations found in the literature.

    The representation of the vowel space as a tesseract also provides a way to formalise a phonological theory known as Element Theory. The basic idea here is that all phonemes are made up of combinations of elements or phonological primes, which in the context of vowels are usually taken as A, I, and U, plus one other element which we are calling G. Element Theory has existed in a number of versions, and since its inception in the mid-1980s it has been reformed in various ways with the aim of reducing the element inventory, to avoid overgeneration (being able to generate more structures than are attested cross-linguistically). The empirical work of San Duanmu has now demonstrated that only four elements are required to represent the vowel phonemes of all the world’s languages.

    It is remarkable that Element Theory apparently existed for 30 years before anyone bothered to check the data to determine how many elements were actually needed. And that was just for vowel phonemes; as far as I know, the number of elements required to represent the consonant phonemes of all the world’s languages is still an open question. I will return to this question in a future blog post.

  • I recently attended a lively group discussion on the English revolution. I have to confess that prior to attending this discussion I wasn’t aware that England had even had a revolution. That’s because the English revolution is usually referred to by another name: the ‘English Civil War’. Of course I was aware that England had a civil war, but it had never occurred to me that it could also be considered a revolution. Yet it is well-known that the English civil war involved violent removal of the ruling class. The reason this historical event is rarely referred to as what it really was – a revolution – boils down to propaganda. If the masses in England became aware that a revolution happened in the past, they might realize that it could just as easily happen again.

    A question naturally arises on re-framing the English civil war as a revolution: if England had a revolution then why does England still have a royal family? Subsequent revolutions in France and Russia both successfully removed the ruling royal families in these nations for good. In contrast, England was a republic for just 11 years, from 1649 to 1660. So why didn’t the revolution stick? The usual reason given for this reversal is the death of Oliver Cromwell in 1658 and the failure of his son and successor, Richard Cromwell, to provide stable leadership, which led to widespread political unrest and a popular desire to return to a more traditional system. Whilst this is true, it is a surface-level explanation that falls foul of the ‘great man’ fallacy.

    A better explanation is that the revolution did not have popular support among the masses, a majority of whom never wanted to get rid of the monarchy completely. The lesson here is that for a revolution to succeed it needs widespread popular support. This explanation raises the question of why a majority of people in England did not want to remove the monarchy. It is a question that remains relevant today. A March 2024 poll found that 62% of respondents wanted to keep the monarchy, whilst just 26% preferred an elected head of state (the remaining 12% presumably don’t care). Attitudes are changing, with younger people significantly less likely to support the royals. Still, as a revolutionary socialist I find the continued support for the monarchy utterly bizarre.

    Nonetheless, this is a question we socialists need to grapple with if we want to have any chance of bringing about a successful revolution. The obvious reason for the royals’ continued support is again propaganda. As a rule, the British press is pathetically obsequious towards members of the royal family. There are exceptions to this of course – notably Princess Diana, who was famously hounded to her death by the press – but in general, the royals get a ridiculously easy ride from our sycophantic, supine media. It’s not just the media though. Take, for example, the risible decision by Transport for London to name London’s new tube line ‘the Elizabeth line’. Now think about how we would laugh if we heard that North Korea had named a new underground line ‘the Kim Jong-Il line’.

    The propaganda would have taken a different form in the 1600s, but it would have been there just the same. Throughout history, ruling classes everywhere have relied on propaganda to manufacture consent for their existence and maintain the status quo. The propaganda of the 1600s would have had a much more religious tone, with the masses being told that a monarch’s authority comes directly from God and not from earthly subjects or institutions (this idea is often referred to as the ‘divine right of kings’). This points to the real reason why the English revolution failed. Cromwell was a devout Puritan whose religious beliefs were the driving force behind his political actions. He believed in a direct, personal relationship with God, as well as the need to cleanse society of sin and frivolity.

    Thus, rather than sweeping away the religious propaganda structure that had enabled the English monarchy to remain in place for so long, Cromwell and his followers effectively reinforced it. In his 1871 work The Civil War in France, Marx gave a now-famous quote that was later highlighted by Lenin in his 1917 work The State and Revolution: ‘… the working class cannot simply lay hold of the ready-made state machinery, and wield it for its own purposes’. Marx was arguing that the existing state machine is an instrument of class oppression, developed to serve the interests of the ruling class, and that this machinery must be broken up by the working class. Although Marx was writing specifically about the Paris Commune, his argument applies equally to the English revolution that occurred over two centuries earlier.

    Ultimately, the English revolution failed because it did not break up existing power structures. This highlights that a revolution does not end once power has been seized; this is just the beginning. The existing state machine must then be abolished and replaced with an entirely new form of proletarian state or social organization, one which involves workers directly and democratically controlling the means of production and political power.

  • It is generally agreed that English spelling is a mess. There is less agreement about whether anything should be done about it. To be sure, we can all agree that there are good reasons to reform English spelling. It would make English easier to learn to read, write, and pronounce, as well as making it more useful for international communication and reducing educational costs, thereby enabling teachers and learners to spend more time on more important subjects. Spelling reform would particularly benefit people with dyslexia, a condition which primarily impacts phonological processing and memory. The complexity of English spelling forces learners to remember many specific, arbitrary rules and exceptions, which places a heavy burden on dyslexic individuals.

    Paradoxically, the irregularity of English spelling is what makes some so resistant to the idea of reforming it. There is a large cognitive investment that goes into accurately learning English spelling, and those that have made that investment are reluctant – consciously or not – to make changes that would render that investment unnecessary. This has led to correct spelling being used as a kind of shibboleth that people who believe themselves to be well-educated use to distinguish themselves from, and look down upon, those they consider to be less well-educated. I’m sure we have all been guilty of this linguistic chauvinism at times. I know I have. Even now, the sight of a misplaced apostrophe can send me into an irrational rage.

    Another reason spelling reform has never taken off in the English-speaking world is that nobody can agree on how English spelling should be reformed, or even what alphabet should be used to reform it. One suggestion is to use the International Phonetic Alphabet, or IPA, which is an alphabetic system of phonetic notation based primarily on the Latin script. The IPA has the advantage of being internationally recognized (as the name suggests), so using it would make learning English much easier for non-native speakers. However, it has the disadvantage of using symbols that do not exist on standard keyboards, and would therefore not be practical for a spelling reform. Realistically, a reform would need to be based on the standard Latin alphabet to have any chance of widespread adoption.

    A better approach is to start with the Latin alphabet and to assign each letter a unique phoneme (sound value). This is straightforward for the letters a, b, d, e, f, g, i, j, k, l, m, n, o, p, r, s, t, v, w, y, and z, all of which have a clear default phoneme assigned to them under current English spelling. The same applies to the digraphs ch, sh, zh, and ng. The digraph th represents two different phonemes, one unvoiced and the other voiced; it would be logical to use dh for the latter. Similarly the letter u represents two vowel phonemes, one rounded and the other unrounded. It makes sense to use u for the former, but not for the latter – the so-called ‘schwa’ sound – as that can be spelled using either o or u in stressed position, and any vowel letter in unstressed position.

    The fact that any vowel letter can be used to represent schwa under current English spelling creates perhaps the thorniest problem any spelling reform must solve, namely: how should we represent schwa whilst remaining as aligned as possible with current spelling? However, current English spelling has some clear patterns that point to a solution. The letter e is usually used to represent schwa before r, and the letter o is usually used before w; elsewhere, the letter u is usually used in stressed position, and the letter a in unstressed position. It would make sense to carry these conventions over to a reformed spelling system. Unfortunately, this solution runs into a difficulty where the same letter is used to represent multiple sounds.
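
    Before turning to that difficulty, the conventions just described can be captured in a tiny Python rule (the function name and interface are my own, purely for illustration):

        # Letter used for schwa under the proposed conventions:
        # 'e' before r, 'o' before w, otherwise 'u' stressed / 'a' unstressed.
        def schwa_letter(next_letter, stressed):
            if next_letter == "r":
                return "e"
            if next_letter == "w":
                return "o"
            return "u" if stressed else "a"

        print(schwa_letter("r", False))   # 'e' (schwa before r)
        print(schwa_letter("t", True))    # 'u' (stressed, as in 'putt')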

    This is not as big a problem as it seems though, as there are few pairs of words that differ by only one sound where that sound would be represented by the same vowel letter. And where that does occur – for example, in the words ‘put’ and ‘putt’, which would both be spelled ‘put’ under the proposed system – the context would resolve any ambiguity. It should also be noted that this is a problem with current English spelling too, as there are many words that are spelled the same as other words but pronounced differently, such as ‘lead’, ‘live’, ‘read’, and ‘tear’. These words would be spelled differently under the proposed system: ‘led’/‘liyd’, ‘layv’/‘liv’, ‘red’/‘riyd’, and ‘teer’/‘tier’. In fact the proposed system would have a lot fewer of these so-called heteronyms than the current system.

    Sow dheer yuw hav it. A nyuw and kampliyt speling riform prapowzal fer dhiy Ingglish langwij. Wot da yuw think?

  • The tendency of the rate of profit to fall, henceforth TRPF, is a theory according to which the rate of profit – the ratio of the profit to the amount of invested capital – decreases over time. The hypothesis is usually attributed to Marx, but economists as diverse as Adam Smith, John Stuart Mill, David Ricardo, and William Stanley Jevons referred explicitly to the TRPF as an empirical phenomenon that demanded further theoretical explanation, although they differed on the reasons why the TRPF should necessarily occur. The TRPF is usually considered a cornerstone of Marxism. But what does it actually tell us, why is it sometimes thought of as controversial, and why should we care?! These are the questions I will attempt to answer in this blog post.

    First, what exactly do we mean by the ‘rate of profit’? Marx defined the rate of profit by r = s/(v+c), where s is surplus value, v is variable capital, and c is constant capital. Surplus value is usually taken as synonymous with profit, but they are not quite the same: surplus value refers to revenue minus labour costs, whereas profit refers to revenue minus total costs. Thus, surplus value will always be greater than or equal to profits. Variable capital refers to labour costs, and constant capital refers to money invested in production, including money invested in fixed assets and raw materials plus any other non-labour expenses. Marx then defined the ‘rate of exploitation’ by e = s/v, and the ‘organic composition of capital’ by o = c/v. Using these definitions, the rate of profit can be written as r = e/(1+o).
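
    The equivalence of these two formulas is easy to check: dividing the numerator and denominator of s/(v+c) by v gives e/(1+o). A quick Python verification with made-up figures:

        # Rate of profit two ways: r = s/(v+c) and r = e/(1+o).
        s, v, c = 100.0, 50.0, 150.0   # illustrative figures, not real data
        e = s / v                      # rate of exploitation
        o = c / v                      # organic composition of capital
        assert abs(s / (v + c) - e / (1 + o)) < 1e-12
        print(s / (v + c))             # 0.5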

    So if that’s the rate of profit then why should we expect it to fall? The explanation is actually quite straightforward. The central idea that Marx had was that overall technological progress has a ‘labour-saving bias’, and that the overall long-term effect of saving labour time in producing commodities with the aid of more and more machinery had to be a falling rate of profit. This follows from the definitions above: over time, the organic composition of capital o = c/v tends to rise as more is invested in machinery, and since o appears in the denominator of r = e/(1+o), the rate of profit must fall so long as the rate of exploitation does not rise fast enough to compensate. However, Marx maintained that this decrease was only a tendency and that there are also counteracting factors operating which can temporarily cause an increase in the rate of profit.
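
    A minimal numerical sketch of that tendency: hold the rate of exploitation e fixed, let the organic composition o rise, and r = e/(1+o) falls (the numbers are purely illustrative):

        # With e held constant, r = e/(1+o) falls as o rises.
        e = 2.0                        # rate of exploitation, held fixed
        for o in [1, 2, 4, 8, 16]:     # rising organic composition c/v
            print(o, round(e / (1 + o), 3))
        # 1 -> 1.0, 2 -> 0.667, 4 -> 0.4, 8 -> 0.222, 16 -> 0.118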

    I think this is one reason why the TRPF is sometimes considered controversial: critics accuse Marx of making his theory unfalsifiable by introducing his counteracting factors. But all Marx was really saying was that we cannot expect the rate of profit to fall in a completely straight line, and that there are bound to be ups and downs and fluctuations along the way. This seems entirely reasonable for a hypothesis in the social sciences. It is not realistic to expect to see a definitive pattern in economic data, as anyone who has ever tried to analyse such data will tell you. But what does the data actually show? A bit of internet searching shows that the rate of profit has indeed fallen across many different countries over the last 100-150 years, exactly as Marx predicted. (Have a look for yourself if you don’t believe me.)

    So we have determined what the TRPF is, and that it is a real phenomenon. The relevance of the TRPF is that it is a core component of Marx’s crisis theory, which posits that the inherent tendency for the rate of profit to fall is the fundamental reason for capitalism’s cyclical booms and busts. The key idea here is that capitalists use the rate of profit as an indicator of how well things are going, so when the rate of profit begins to fall, they are less likely to invest in new equipment and this pullback in investment slows down the economy. However, it will also prevent the rate of profit from decreasing so quickly (or at all), which in turn will encourage capitalists to invest more, leading to a boom. This boom leads to an increase in the amount of constant capital which reduces the rate of profit – and on it goes.

    This is another reason why the TRPF is considered controversial. There are many competing views about what causes capitalism’s booms and busts and not everybody agrees with Marx that the TRPF is to blame, even if they accept that it is a real phenomenon. However, in 2015 the economist Esteban Maito published some empirical work on the TRPF which seems to back up Marx’s hypothesis. Maito provides data on the rate of profit across 14 countries over the last 150 years. Not only does he demonstrate that the rate of profit has decreased in those countries over that time, but his data also show some interesting trends. We can see the rate of profit decrease more sharply in the periods immediately prior to the crashes of 1929 and 2008, as well as before and during the ‘stagflation’ era of the 1970s.

    Of course, eyeballing some graphs doesn’t prove anything, and more rigorous tests would need to be done to validate the hypothesis that the TRPF is the primary cause of capitalism’s booms and busts. But it suggests that Marx may have been on the right lines, which is all the more remarkable when you consider that he made this hypothesis over 150 years ago, before any of the crises mentioned above had actually occurred.