Groucho Marxism

Questions and answers on socialism, Marxism, and related topics

  • Calculus is the mathematical study of change. It has two major branches: differential calculus and integral calculus. The former concerns rates of change and the slopes of curves, whereas the latter concerns accumulation of quantities and areas under or between curves. Calculus was formulated independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz, and was instrumental in enabling Newton to formulate his laws of motion. Despite its obvious success in describing the motion of objects, however, differential calculus in particular came in for some criticism. Perhaps the most prominent critic was the 18th-century Irish philosopher and bishop George Berkeley, who critiqued differential calculus in his 1734 pamphlet The Analyst.

    Berkeley’s argument was that the method relied on infinitesimals, which were treated as quantities that are simultaneously zero and non-zero. To see this, recall that to take the derivative of a function f() defined on the real numbers, we first calculate the quantity [f(x+h)-f(x)]/h, then evaluate the result when h = 0. The quantity h is considered an infinitesimal of the type that Berkeley was referring to. In the first step this cannot be zero, as we divide by h, and division by zero is not allowed; but then in the second step we set h equal to zero! You can see where Berkeley was coming from with his critique. However, it is now generally accepted that Berkeley’s criticism was answered with the rigorous development of limits in the 19th century.

    The solution mathematicians came up with was to define the derivative as the limit of [f(x+h)-f(x)]/h as h ‘tends to zero’. This was given a precise definition using something called the ‘epsilon-delta’ definition of a limit, which I won’t go into here. Suffice it to say that the epsilon-delta definition relies on the existence of infinite sets. In a previous blog post I criticized the assumption that infinite sets exist – the so-called ‘axiom of infinity’ – on materialist grounds. A critic of this position might argue that removing infinite sets from mathematics would remove our ability to rigorously define the derivative of a function using limits; and they would be right. But there is an alternative formulation of calculus which obviates the need for such a definition altogether.

    Discrete calculus is an analogue of calculus for functions defined on discrete domains. In the remainder of this blog post I will go through some of the basics. Consider a function f() defined on the finite domain X = {0,1,…,N}. The discrete derivative of f() is defined by Df(x) = f(x+1)-f(x). The discrete derivative is linear: D(af+bg) = aDf + bDg for all integer constants a,b and functions f,g defined on X. We can derive a discrete analogue of the product rule: D(fg)(x) = f(x+1)Dg(x)+Df(x)g(x). We can also derive a discrete analogue of the quotient rule: D(f/g)(x) = [Df(x)g(x)-f(x)Dg(x)]/[g(x)g(x+1)]. Recall that in standard (continuous) calculus, d(x^n)/dx = nx^(n-1). In discrete calculus we have the analogous rule Dx^(n) = nx^(n-1), where x^(n) denotes the ‘falling power’ x(x-1)…(x-n+1) (the parenthesised exponent distinguishes falling powers from ordinary ones).
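    These definitions are easy to check numerically. Here is a minimal Python sketch (the helper names D and falling are my own) that verifies the product, quotient, and falling-power rules on sample functions:

```python
# A quick numerical check of the discrete-calculus definitions above.
# The helper names D and falling are my own.

def D(f):
    """Discrete derivative: Df(x) = f(x+1) - f(x)."""
    return lambda x: f(x + 1) - f(x)

def falling(n):
    """Falling power x^(n) = x(x-1)...(x-n+1), as a function of x."""
    def power(x):
        result = 1
        for k in range(n):
            result *= x - k
        return result
    return power

f = lambda x: 3 * x + 1
g = lambda x: x * x

for x in range(6):
    # Product rule: D(fg)(x) = f(x+1)Dg(x) + Df(x)g(x)
    assert D(lambda t: f(t) * g(t))(x) == f(x + 1) * D(g)(x) + D(f)(x) * g(x)
    # Quotient rule: D(f/g)(x) = [Df(x)g(x) - f(x)Dg(x)] / [g(x)g(x+1)]
    if g(x) != 0:
        assert abs(D(lambda t: f(t) / g(t))(x)
                   - (D(f)(x) * g(x) - f(x) * D(g)(x)) / (g(x) * g(x + 1))) < 1e-9
    # Falling-power rule: D x^(3) = 3 x^(2)
    assert D(falling(3))(x) == 3 * falling(2)(x)
```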

    Euler’s number, e, is the number with the property that d(e^x)/dx = e^x. The discrete analogue of e is 2, as D(2^x) = 2^(x+1)-2^x = 2^x. The discrete integral is simply a sum: ∑a→b f(x) = f(a)+f(a+1)+…+f(b-1); note that the sum does not include f(b). The fundamental theorem of discrete calculus follows immediately from the definition of the discrete integral: ∑a→b Df(x) = f(b)-f(a). It is then straightforward to determine from the fundamental theorem that ∑a→b x^(n) = [b^(n+1)-a^(n+1)]/(n+1), where the exponents again denote falling powers. Note that it follows from the product rule above that Df(x)g(x) = D(fg)(x)-f(x+1)Dg(x). Integrating (summing) both sides between a and b, we obtain a discrete analogue of the integration by parts formula: ∑a→b Df(x)g(x) = f(b)g(b)-f(a)g(a)-∑a→b f(x+1)Dg(x). This formula allows us to do more advanced integrations.
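    The integral-side identities can be checked in the same spirit (again, the helper names are my own):

```python
# Numerical checks for the discrete integral identities above.

def D(f):
    """Discrete derivative: Df(x) = f(x+1) - f(x)."""
    return lambda x: f(x + 1) - f(x)

def dsum(f, a, b):
    """Discrete integral: f(a) + f(a+1) + ... + f(b-1)."""
    return sum(f(x) for x in range(a, b))

def falling(n):
    """Falling power x^(n) = x(x-1)...(x-n+1)."""
    def power(x):
        result = 1
        for k in range(n):
            result *= x - k
        return result
    return power

two_x = lambda x: 2 ** x
assert all(D(two_x)(x) == two_x(x) for x in range(10))  # 2 is the discrete e

f, g = (lambda x: x * x + 1), (lambda x: 2 * x - 3)
a, b = 2, 8
assert dsum(D(f), a, b) == f(b) - f(a)                  # fundamental theorem

# Power sum: (n+1) * sum of x^(n) over [a, b) equals b^(n+1) - a^(n+1)
n = 2
assert (n + 1) * dsum(falling(n), a, b) == falling(n + 1)(b) - falling(n + 1)(a)

# Integration by parts
lhs = dsum(lambda x: D(f)(x) * g(x), a, b)
rhs = f(b) * g(b) - f(a) * g(a) - dsum(lambda x: f(x + 1) * D(g)(x), a, b)
assert lhs == rhs
```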

    We can also define discrete second derivatives. The obvious definition involves simply applying the first derivative twice: D²f(x) = D(Df)(x) = f(x+2)-2f(x+1)+f(x). There is an alternative definition, though. Letting D⁺f(x) = f(x+1)-f(x) and D⁻f(x) = f(x)-f(x-1), we can set D²f(x) = D⁺(D⁻f) = D⁻(D⁺f) = f(x+1)-2f(x)+f(x-1). This definition has the advantage of being symmetric around x. In continuous calculus, sin() and cos() are functions f() with the property that d²f/dx² = -f. To find discrete analogues of these, we must find functions f() such that D²f = -f. That is, we must solve f(x+1)-f(x)+f(x-1) = 0, or f(x+1) = f(x)-f(x-1). Setting f(0) = 0 and f(1) = 1, we get the sequence (0,1,1,0,-1,-1,0,1,…); this is the discrete analogue of sin(). Setting f(0) = 1 and f(1) = 0, we get (1,0,-1,-1,0,1,1,0,…); this is the discrete analogue of cos().

    Let dsin() and dcos() denote these discrete analogues of sin() and cos(). Then from the definitions, we have D⁺(dsin(x)) = dcos(x) and D⁺(dcos(x)) = -dsin(x+1). Similarly, we have D⁻(dsin(x)) = dcos(x-1) and D⁻(dcos(x)) = -dsin(x). These are analogous to the relations between sin() and cos() and their derivatives in standard calculus. Thus, we have successfully defined discrete analogues of derivatives (including the product and quotient rules), integrals (including integration by parts), second derivatives, Euler’s number, and the trigonometric functions sin() and cos(). There are yet more analogous definitions that can be made, but I will leave these for a future blog post.
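    The sequences and the relations between them are easy to generate and verify from the recurrence f(x+1) = f(x)-f(x-1) (function names below are my own):

```python
# Generating the discrete sine and cosine from the recurrence
# f(x+1) = f(x) - f(x-1), then checking the relations stated above.

def solve(f0, f1, length):
    """Iterate the recurrence from the two initial values."""
    seq = [f0, f1]
    while len(seq) < length:
        seq.append(seq[-1] - seq[-2])
    return seq

dsin = solve(0, 1, 12)
dcos = solve(1, 0, 12)

assert dsin[:8] == [0, 1, 1, 0, -1, -1, 0, 1]
assert dcos[:8] == [1, 0, -1, -1, 0, 1, 1, 0]

for x in range(1, 11):
    # Symmetric second derivative satisfies D²f = -f
    assert dsin[x + 1] - 2 * dsin[x] + dsin[x - 1] == -dsin[x]
    # Forward differences: D⁺dsin(x) = dcos(x), D⁺dcos(x) = -dsin(x+1)
    assert dsin[x + 1] - dsin[x] == dcos[x]
    assert dcos[x + 1] - dcos[x] == -dsin[x + 1]
    # Backward differences: D⁻dsin(x) = dcos(x-1), D⁻dcos(x) = -dsin(x)
    assert dsin[x] - dsin[x - 1] == dcos[x - 1]
    assert dcos[x] - dcos[x - 1] == -dsin[x]
```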

  • In classical logic, statements or propositions are always either true or false. One of the justifications given for this rule is that contradictions – statements that are both true and false – entail everything. Which is to say that if you allow a single contradiction, you can prove any proposition you like. This feature is known as the principle of explosion and can easily be proved, as follows. Suppose the proposition P is both true and false; then P is true, so (P or Q) is true for any proposition Q; but P is also false, so (P or Q) can only be true if Q is true; and since Q was arbitrary, the result is proved. However, there are alternatives to classical logic that allow for the coexistence of contradictory statements without leading to a logical explosion where anything can be proven true. These logics are referred to as ‘paraconsistent’.

    It is clear from the above that in order to avoid the principle of explosion, we must abandon either the principle of ‘disjunction introduction’ – P implies (P or Q) – or the principle of ‘disjunctive syllogism’ – from (P or Q) and (not P), infer Q. In practice, we can choose to abandon either or both of these principles if we want to. Some might object to tampering with the laws of logic in this way. Logic is the foundation upon which the whole of mathematics rests, and messing around with this foundation may seem rather foolhardy. But it is important to point out that, much as we are free to choose the axioms of mathematics however we like, we are free to choose the rules of logic however we like as well. Just like mathematics, logic is, at the end of the day, a human invention.
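    One well-known paraconsistent system is Priest’s three-valued ‘Logic of Paradox’ (LP), in which the third truth value means ‘both true and false’. A small sketch (the numeric encoding is mine) shows disjunction introduction surviving while disjunctive syllogism – and hence explosion – fails:

```python
# Priest's Logic of Paradox (LP), with truth values F=0, B=1, T=2, where B
# means 'both true and false'. A formula is accepted ('designated') if its
# value is T or B. The numeric encoding is my own.

NOT = lambda p: 2 - p          # negation swaps T and F, fixes B
OR = max                       # disjunction takes the 'truer' value
designated = lambda p: p >= 1  # T and B are both designated

# Disjunction introduction survives: a designated P makes (P or Q) designated.
assert all(designated(OR(p, q)) for p in (1, 2) for q in (0, 1, 2))

# Disjunctive syllogism fails: with P = B and Q = F, both premises
# (P or Q) and (not P) are designated, yet the conclusion Q is not.
P, Q = 1, 0
assert designated(OR(P, Q)) and designated(NOT(P))
assert not designated(Q)  # no explosion
```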

    The view that (some) statements can be both true and false is known as ‘dialetheism’. Dialetheism is not a system of formal logic; it is a thesis about truth that influences the construction of a formal logic. We have already seen that there exist systems of formal logic that allow for contradictions without leading to the principle of explosion. Whether we choose to use one of these systems rather than classical logic depends on our views on dialetheism. One argument in favour of dialetheism is that it resolves well-known paradoxes that involve contradictory statements. The most famous of these is the liar paradox, which is encapsulated by the statement ‘I am lying’. The paradox arises when trying to determine whether this statement is true or not.

    The classical way to solve this problem is to restrict the logic so that self-referential statements of this kind are not allowed. Dialetheists, on the other hand, respond to this problem by simply accepting the statement ‘I am lying’ as both true and false. To a dialetheist, therefore, there is no paradox at all. Another argument in favour of dialetheism is that it is more closely aligned with human reasoning and human language. Ambiguous situations may cause humans to affirm both a proposition and its negation. For example, if someone stands in the doorway to a room, it may seem reasonable both to affirm that the person is in the room and to affirm that they are not in the room. Another example arises in statements such as ‘that car is red’, which one person may evaluate as true and another as false.

    One prominent advocate for dialetheism is the British philosopher Graham Priest. According to Priest, there is a close connection between dialetheism and the dialectics of Hegel and Marx. In fact Priest goes so far as to argue that both Hegel and Marx were dialetheists. Priest gives two examples of Marx’s apparent dialetheism. The first concerns the notion of a commodity, which, according to the argument set out in Das Kapital, is both a use-value and an exchange-value. This entails a contradiction as use-values and exchange-values are incommensurate. The second example concerns wage labour. Under capitalism, wage labourers are free to sell their labour power as they choose; yet they are hardly free in any meaningful sense as the alternative is starvation and death.

    The close connection between Marxism and dialetheism is not surprising as one of the things Marx is most famous for is pointing out the contradictions of capitalism. In formal logic, a contradiction is simply a statement which is both true and false. Whether Marx meant the term in this technical sense, or in a more colloquial sense, is open to debate. Regardless, in a recent (2024) paper, Priest shows that paraconsistent logic, which allows for true contradictions, can provide a basis for formalizing the somewhat nebulous concept of the dialectic. Specifically, Priest provides a formal logical model of a dialectical progression, a dynamic concept found in the writings of both Hegel and Marx. I shall return to this model in a future blog post.

  • The ‘self’ is a complex concept that refers to an individual’s unique sense of being, encompassing their thoughts, identity, and consciousness. Although it is difficult to pin down a precise definition of the self, most of us believe we have one. Or that we are one. Actually, which is it? Already we are starting to see how the concept becomes problematic as soon as you begin to examine it. The self has a peculiar property that the more you look for it, the less tangible it seems. Where is this ‘self’ exactly? Most of us think of the self as a little person sitting inside our heads. But this can’t be right, because that little person must also have a self, and where do they sit? Inside the little person’s head? This is just sending us into an infinite regress.

    Another possibility is to define the self to be identical to the body. This doesn’t really work either though, as most people would say that removing a part of their body – say, their leg – would not make them a fundamentally different person. Perhaps instead we can take the self to be identical to the brain. This seems a better definition, as removing a part of someone’s brain actually can turn them into a different person, at least in the eyes of others. We see this happen when victims of severe head injuries undergo fundamental personality changes. ‘They’re just not in there anymore’ is a common refrain from distraught family members, suggesting that they believe the injury victim’s old self has been either modified or replaced.

    There is a problem with the ‘self = brain’ definition too though. The brain is not static; it is changing constantly through a process called neuroplasticity, whereby it reorganizes its structure, functions, and connections in response to external stimuli. In contrast, the self is usually understood to be a static thing which stays the same throughout a person’s life (barring any serious head injuries). However, the vast majority of neurons in the brain are never replaced, meaning the neurons you are born with are the ones you have for your entire life. Perhaps then we should identify the self not with the brain as a whole, but with the neurons within it? This doesn’t really work either, as brain function is a result of the interactions between neurons rather than of the neurons themselves.

    Whatever part or parts of the brain we try to associate with the self, we will always run into the same difficulty. It is the interactions between different parts of the brain that result in thoughts, identity, and consciousness, rather than the brain’s individual components. Maybe then we should consider the self as an emergent property of the complex system that is the brain. However, if we are to take this as our understanding of the self, then we must abandon the intuitive idea of the self as a fixed entity. Thus we have arrived at an impasse. If we want the self to be a fixed entity, then we must abandon the idea that it is grounded in the brain; conversely, if we want to keep that idea, we must abandon the notion of the self as a fixed entity.

    The only way out of this impasse is to accept that the self, as usually understood, is an illusion. This illusion emerges from a collection of different, often conflicting, thoughts, memories, and bodily processes, rather than from a fixed part of the brain. The subjective experience of a solid, independent self is a product of the brain’s storytelling and perception-making, not an objective reality. The brain creates a narrative to make sense of the world, and the self is the main character in this story. There is no single, anatomically located self in the brain, as most of us like to imagine; instead, the feeling of self arises from a complex network of processes.

    In a previous blog post I argued that free will, as usually understood, is also an illusion. The obvious parallel between the argument put forward there and the argument being put forward here is no coincidence. The illusion of self and the illusion of free will are two sides of the same coin. In fact the latter can be seen as a consequence of the former, as the illusion of free will stems from the idea of a fixed self which is in control of our actions. The illusion of self leads also to the notion of ego, whereby we see the world only from our own perspective. In another blog post I pointed out just how harmful this notion can be to our well-being and the well-being of those around us. Understanding that the self is an illusion is the first step in taming the ego, which in turn is the key to a happy life.

  • In linguistics, the term ‘phoneme’ refers to any of the perceptually distinct units of sound in a specified language that distinguish one word from another; for example p, b, d, and t in the English words pad, pat, bad, and bat. Languages vary considerably in the number of phonemes they have, from as few as 9 in the Brazilian indigenous language Pirahã to as many as 141 in the southern African language ǃXũ. It is usually claimed that there are 44 phonemes in English: 24 consonant phonemes and 20 vowel phonemes. However the Hungarian linguist Peter Szigetvári has recently argued – convincingly, in my view – that English has just 6 vowel phonemes, and that the remaining 18 vowel sounds can be considered combinations of these six vowels plus a glide (y, w, or h).

    This suggests that the number of vowel phonemes may have been overestimated in other languages too. A recent (2013) study by the Chinese linguist San Duanmu seems to bear this out. Duanmu argues – again, convincingly in my view – that vowel inventories in all the world’s languages can be represented using just four basic features, which we may take as [low], [front], [round], and [raised] (here I am following the standard convention of putting features within square brackets []). What makes Duanmu’s argument convincing is that he has tested his hypothesis against data on languages from around the world, using two separate data sources (the databases UPSID and P-Base). Duanmu’s analysis puts an upper bound of 16 on the number of vowel phonemes a language could possibly have.

    This means that any vowel in any of the world’s languages can be represented by a 4-vector (a,b,c,d), where a, b, c, and d represent the features [low], [front], [round], and [raised], and can be either 0 or 1. A 1 signifies that the feature is present in that vowel phoneme, and a 0 signifies it is absent. We can then define four basic vowels as A = (1,0,0,0), I = (0,1,0,0), U = (0,0,1,0), and G = (0,0,0,1). Any vowel phoneme in any of the world’s languages can then be represented using combinations of these four basic vowels. For example, we can define the compound vowels E = A+I = (1,1,0,0), O = A+U = (1,0,1,0), and Y = I+U = (0,1,1,0). Under this conception, the space of all vowels is represented by a mathematical structure called a tesseract, or 4-dimensional hypercube.
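    Here is a small sketch of this representation (the Python encoding is mine; the feature values come straight from the definitions above):

```python
# Duanmu's four vowel features as 4-bit vectors; compound vowels are built
# by componentwise combination. The helper names are my own.
from itertools import product

FEATURES = ("low", "front", "round", "raised")

A = (1, 0, 0, 0)
I = (0, 1, 0, 0)
U = (0, 0, 1, 0)
G = (0, 0, 0, 1)

def combine(v, w):
    """Componentwise OR: a feature is present if present in either vowel."""
    return tuple(a | b for a, b in zip(v, w))

assert combine(A, I) == (1, 1, 0, 0)  # E
assert combine(A, U) == (1, 0, 1, 0)  # O
assert combine(I, U) == (0, 1, 1, 0)  # Y

# The full vowel space: the 2^4 = 16 corners of the tesseract.
space = list(product((0, 1), repeat=4))
assert len(space) == 16
```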

    In the linguistics literature, the vowel space is usually represented as a quadrilateral, based on the shape of the tongue when pronouncing different vowel sounds. However the British linguist Geoff Lindsey and others have argued that the vowel space is better represented as a triangle, based on the resonant frequencies of different vowel sounds. Thus, the quadrilateral representation is based on the articulation of different vowel sounds, whereas the triangular representation is based on their acoustic characteristics. The representation of the vowel space as a tesseract in theory provides an alternative view. Unfortunately, it is impossible for us 3-dimensional creatures to visualize a 4-dimensional shape such as a tesseract.

    What we can do is transform a 4-dimensional shape into 3 dimensions, and again into 2 dimensions if we like, both of which we can visualize. One way of doing this is by taking what is known as the ‘vertex figure’ of the 4-dimensional shape. Roughly speaking, this is the figure exposed when a corner of a general polytope – that is, a figure with flat faces – is sliced off. The vertex figure of a tesseract is a tetrahedron, the 3-dimensional analogue of a triangle. The orthographic projection of a tetrahedron into 2-dimensional space results in a quadrilateral in general, or a triangle when viewed from a face or vertex. Thus, the representation of the vowel space as a tesseract provides a way to reconcile the two different 2-dimensional representations found in the literature.

    The representation of the vowel space as a tesseract also provides a way to formalise a phonological theory known as Element Theory. The basic idea here is that all phonemes are made up of combinations of elements or phonological primes, which in the context of vowels are usually taken as A, I, and U, plus one other element which we are calling G. Element Theory has a number of versions and has since its inception in the mid-1980s been revised in various ways with the aim of reducing the element inventory, to avoid overgeneration (being able to generate more structures than are attested cross-linguistically). The empirical work of San Duanmu has now demonstrated that only four elements are required to represent the vowel phonemes of all the world’s languages.

    It is remarkable that Element Theory apparently existed for 30 years before anyone bothered to check the data to determine how many elements were actually needed. And that was just for vowel phonemes; as far as I know, the number of elements required to represent the consonant phonemes of all the world’s languages is still an open question. I will return to this question in a future blog post.

  • I recently attended a lively group discussion on the English revolution. I have to confess that prior to attending this discussion I wasn’t aware that England had even had a revolution. That’s because the English revolution is usually referred to by another name: the ‘English Civil War’. Of course I was aware that England had a civil war, but it had never occurred to me that it could also be considered a revolution. Yet it is well-known that the English civil war involved the violent removal of the ruling class. The reason this historical event is rarely referred to as what it really was – a revolution – boils down to propaganda. If the masses in England became aware that a revolution happened in the past, they might realize that it could just as easily happen again.

    A question naturally arises on re-framing the English civil war as a revolution: if England had a revolution then why does England still have a royal family? Subsequent revolutions in France and Russia both successfully removed the ruling royal families in these nations for good. In contrast, England was a republic for just 11 years, from 1649 to 1660. So why didn’t the revolution stick? The usual reason given for this reversal is the death of Oliver Cromwell in 1658 and the failure of his son and successor, Richard Cromwell, to provide stable leadership, which led to widespread political unrest and a popular desire to return to a more traditional system. Whilst this is true, it is a surface-level explanation that falls foul of the ‘great man’ fallacy.

    A better explanation is that the revolution did not have popular support among the masses, a majority of whom never wanted to get rid of the monarchy completely. The lesson here is that for a revolution to succeed it needs widespread popular support. This explanation raises the question of why a majority of people in England did not want to remove the monarchy. It is a question that remains relevant today. A March 2024 poll found that 62% of respondents wanted to keep the monarchy, whilst just 26% preferred an elected head of state (the remaining 12% presumably don’t care). Attitudes are changing, with younger people significantly less likely to support the royals. Still, as a revolutionary socialist I find the continued support for the monarchy utterly bizarre.

    Nonetheless, this is a question we socialists need to grapple with if we want to have any chance of bringing about a successful revolution. The obvious reason for the royals’ continued support is again propaganda. As a rule, the British press is pathetically obsequious towards members of the royal family. There are exceptions to this of course – notably Princess Diana, who was famously hounded to her death by the press – but in general, the royals get a ridiculously easy ride from our sycophantic, supine media. It’s not just the media though. Take, for example, the risible decision by Transport for London to name London’s new tube line ‘the Elizabeth line’. Now think about how we would laugh if we heard that North Korea had named a new underground line ‘the Kim Jong-il line’.

    The propaganda would have taken a different form in the 1600s, but it would have been there just the same. Throughout history, ruling classes everywhere have relied on propaganda to manufacture consent for their existence and maintain the status quo. The propaganda of the 1600s would have had a much more religious tone, with the masses being told that a monarch’s authority comes directly from God and not from earthly subjects or institutions (this idea is often referred to as the ‘divine right of kings’). This points to the real reason why the English revolution failed. Cromwell was a devout Puritan whose religious beliefs were the driving force behind his political actions. He believed in a direct, personal relationship with God, as well as the need to cleanse society of sin and frivolity.

    Thus, rather than sweeping away the religious propaganda structure that had enabled the English monarchy to remain in place for so long, Cromwell and his followers effectively reinforced it. In his 1871 work The Civil War in France, Marx gave a now-famous quote that was later highlighted by Lenin in his 1917 work The State and Revolution: ‘… the working class cannot simply lay hold of the ready-made state machinery, and wield it for its own purposes’. Marx was arguing that the existing state machine is an instrument of class oppression, developed to serve the interests of the ruling class, and that this machinery must be broken up by the working class. Although Marx was talking specifically about the French revolution, his argument applies to the English revolution that occurred over 100 years prior.

    Ultimately, the English revolution failed because it did not break up existing power structures. This highlights that a revolution does not end once power has been seized; this is just the beginning. The existing state machine must then be abolished and replaced with an entirely new form of proletarian state or social organization, one which involves workers directly and democratically controlling the means of production and political power.

  • It is generally agreed that English spelling is a mess. There is less agreement about whether anything should be done about it. To be sure, we can all agree that there are good reasons to reform English spelling. It would make English easier to learn to read, write, and pronounce, as well as making it more useful for international communication and reducing educational costs, thereby enabling teachers and learners to spend more time on more important subjects. Spelling reform would particularly benefit people with dyslexia, a condition which primarily impacts phonological processing and memory. The complexity of English spelling forces learners to remember many specific, arbitrary rules and exceptions, which places a heavy burden on dyslexic individuals.

    Paradoxically, the irregularity of English spelling is what makes some so resistant to the idea of reforming it. There is a large cognitive investment that goes into accurately learning English spelling, and those that have made that investment are reluctant – consciously or not – to make changes that would render that investment unnecessary. This has led to correct spelling being used as a kind of shibboleth that people who believe themselves to be well-educated use to distinguish themselves from, and look down upon, those they consider less well-educated. I’m sure we have all been guilty of this linguistic chauvinism at times. I know I have. Even now, the sight of a misplaced apostrophe can send me into an irrational rage.

    Another reason spelling reform has never taken off in the English-speaking world is that nobody can agree on how English spelling should be reformed, or even what alphabet should be used to reform it. One suggestion is to use the International Phonetic Alphabet, or IPA, which is an alphabetic system of phonetic notation based primarily on the Latin script. The IPA has the advantage of being internationally recognized (as the name suggests), so using this would make learning English much easier for non-native speakers. The IPA has the disadvantage that it uses symbols that do not exist on standard keyboards and would therefore not be practical to use in a spelling reform. Realistically, a reform would need to be based on the standard Latin alphabet to have any chance of widespread adoption.

    A better approach is to start with the Latin alphabet and to assign each letter a unique phoneme (sound value). This is straightforward for the letters a, b, d, e, f, g, i, j, k, l, m, n, o, p, r, s, t, v, w, y, and z, all of which have a clear default phoneme assigned to them under current English spelling. The same applies to the digraphs ch, sh, zh, and ng. The digraph th represents two different phonemes, one unvoiced and the other voiced; it would be logical to use dh for the latter. Similarly the letter u represents two vowel phonemes, one rounded and the other unrounded. It makes sense to use u for the former, but not for the latter – the so-called ‘schwa’ sound – as that can be spelled using either o or u in stressed position, and any vowel letter in unstressed position.

    The fact that any vowel letter can be used to represent schwa under current English spelling creates perhaps the thorniest problem any spelling reform must solve, namely: how should we represent schwa whilst remaining as aligned as possible with current spelling? However, current English spelling has some clear patterns that point to a solution. The letter e is usually used to represent schwa before r, and the letter o is usually used before w; elsewhere, the letter u is usually used in stressed position, and the letter a in unstressed position. It would make sense to carry these conventions over to a reformed spelling system. Unfortunately, this solution runs into a difficulty where the same letter is used to represent multiple sounds.

    This is not as big a problem as it seems though, as there are few pairs of words that differ by only one sound where that sound would be represented by the same vowel letter. And where that does occur – for example, in the words ‘put’ and ‘putt’, which would both be spelled ‘put’ under the proposed system – the context would resolve any ambiguity. It should also be noted that this is a problem with current English spelling too, as there are many words that are spelled the same as other words but pronounced differently, such as ‘lead’, ‘live’, ‘read’, and ‘tear’. These words would be spelled differently under the proposed system: ‘led’/‘liyd’, ‘layv’/‘liv’, ‘red’/‘riyd’, and ‘teer’/‘tier’. In fact the proposed system would have a lot fewer of these so-called heteronyms than the current system.
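    To illustrate, here is a toy respeller built only from the example words above. The phoneme labels (DRESS, FLEECE, PRICE, …) are lexical-set names I am using for convenience – they are not part of the proposal, and a real implementation would need a full phonemic lexicon:

```python
# A toy respeller for the proposed system, using only mappings that can be
# read off the examples in the text. The phoneme labels are lexical-set
# names of my own choosing; the grapheme values follow the post's examples.

graphemes = {
    "l": "l", "d": "d", "v": "v", "r": "r", "t": "t",
    "DRESS": "e",     # 'led', 'red'
    "KIT": "i",       # 'liv'
    "FLEECE": "iy",   # 'liyd', 'riyd'
    "PRICE": "ay",    # 'layv'
    "SQUARE": "eer",  # 'teer'
    "NEAR": "ier",    # 'tier'
}

def respell(phonemes):
    """Spell a word from its phoneme sequence, one grapheme per phoneme."""
    return "".join(graphemes[p] for p in phonemes)

# The heteronyms from the text come out with distinct spellings:
assert respell(["l", "DRESS", "d"]) == "led"    # lead (the metal)
assert respell(["l", "FLEECE", "d"]) == "liyd"  # lead (to guide)
assert respell(["l", "PRICE", "v"]) == "layv"   # live (adjective)
assert respell(["l", "KIT", "v"]) == "liv"      # live (verb)
assert respell(["r", "FLEECE", "d"]) == "riyd"  # read (present tense)
```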

    Sow dheer yuw hav it. A nyuw and kampliyt speling riform prapowzal fer dhiy Ingglish langwij. Wot da yuw think?

  • The tendency of the rate of profit to fall, henceforth TRPF, is a theory according to which the rate of profit – the ratio of the profit to the amount of invested capital – decreases over time. The hypothesis is usually attributed to Marx, but economists as diverse as Adam Smith, John Stuart Mill, David Ricardo, and William Stanley Jevons referred explicitly to the TRPF as an empirical phenomenon that demanded further theoretical explanation, although they differed on the reasons why the TRPF should necessarily occur. The TRPF is usually considered a cornerstone of Marxism. But what does it actually tell us, why is it sometimes thought of as controversial, and why should we care?! These are the questions I will attempt to answer in this blog post.

    First, what exactly do we mean by the ‘rate of profit’? Marx defined the rate of profit by r = s/(v+c), where s is surplus value, v is variable capital, and c is constant capital. Surplus value is usually taken as synonymous with profit, but they are not quite the same: surplus value refers to revenue minus labour costs, whereas profit refers to revenue minus total costs. Thus, surplus value will always be greater than or equal to profits. Variable capital refers to labour costs, and constant capital refers to money invested in production, including money invested in fixed assets and raw materials plus any other non-labour expenses. Marx then defined the ‘rate of exploitation’ by e = s/v, and the ‘organic composition of capital’ by o = c/v. Using these definitions, the rate of profit can be written as r = e/(1+o).

    So if that’s the rate of profit then why should we expect it to fall? The explanation is actually quite straightforward. The central idea that Marx had was that overall technological progress has a ‘labour-saving bias’, and that the overall long-term effect of saving labour time in producing commodities with the aid of more and more machinery had to be a falling rate of profit. This is obvious from the definition: over time, constant capital has a tendency to increase, and as constant capital appears in the denominator of the formula for the rate of profit, it is inevitable that the rate of profit will go down. However, Marx maintained that this decrease was only a tendency and that there are also counteracting factors operating which can temporarily cause an increase in the rate of profit.
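    The logic of the argument can be illustrated with a toy calculation based on the formulas above. The numbers are invented purely for illustration; all that matters is the direction of travel:

```python
# Toy illustration: hold the rate of exploitation e = s/v fixed and let
# constant capital c grow; r = s/(v+c) = e/(1+o) then falls. All figures
# are invented.

def rate_of_profit(s, v, c):
    return s / (v + c)

v = 100.0            # variable capital (labour costs)
e = 1.0              # rate of exploitation, fixed at 100%
s = e * v            # surplus value

rates = []
for period in range(5):
    c = 100.0 * 1.5 ** period  # constant capital grows 50% per period
    o = c / v                  # organic composition of capital
    r = rate_of_profit(s, v, c)
    assert abs(r - e / (1 + o)) < 1e-12  # r = e/(1+o), as derived above
    rates.append(r)

assert all(r1 > r2 for r1, r2 in zip(rates, rates[1:]))  # the rate falls
```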

    I think this is one reason why the TRPF is sometimes considered controversial: critics accuse Marx of making his theory unfalsifiable by introducing these counteracting factors. But all Marx was really saying was that we cannot expect the rate of profit to fall in a completely straight line; there are bound to be ups and downs and fluctuations along the way. This seems entirely reasonable for a hypothesis in the social sciences. It is not realistic to expect a perfectly clean pattern in economic data, as anyone who has ever tried to analyse such data will tell you. But what does the data actually show? A bit of internet searching shows that the rate of profit has indeed fallen across many different countries over the last 100-150 years, exactly as Marx predicted. (Have a look for yourself if you don’t believe me.)

    So we have established what the TRPF is, and that it is a real phenomenon. Its relevance is that it is a core component of Marx’s crisis theory, which posits that the inherent tendency of the rate of profit to fall is the fundamental reason for capitalism’s cyclical booms and busts. The key idea here is that capitalists use the rate of profit as an indicator of how well things are going, so when the rate of profit begins to fall, they become less likely to invest in new equipment, and this pullback in investment slows down the economy. However, the pullback also prevents the rate of profit from decreasing so quickly (or at all), which in turn encourages capitalists to invest more, leading to a boom. The boom increases the amount of constant capital, which reduces the rate of profit – and on it goes.
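    The feedback loop can be caricatured in a few lines of code. Every number here is invented – the profit threshold, the investment and depreciation rates – and this is not Marx’s model; the point is only that an ‘invest while profitable, retrench otherwise’ rule produces cyclical behaviour rather than a smooth decline:

```python
# Crude caricature of the boom-bust cycle: capitalists expand constant capital
# while the rate of profit sits above a (hypothetical) acceptable threshold;
# below it, investment halts and depreciation erodes the capital stock,
# which in turn restores profitability.
e, v = 1.0, 100.0      # fixed rate of exploitation and variable capital
c = 100.0              # initial constant capital
threshold = 0.30       # assumed 'acceptable' rate of profit
history = []
for _ in range(40):
    r = e / (1 + c / v)    # r = e/(1+o), with o = c/v
    history.append(r)
    if r > threshold:
        c *= 1.15          # boom: net investment grows constant capital
    else:
        c *= 0.90          # bust: no investment, capital depreciates

# the rate of profit ends up oscillating around the threshold
print(min(history), max(history))
```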

    This is another reason why the TRPF is considered controversial. There are many competing views about what causes capitalism’s booms and busts and not everybody agrees with Marx that the TRPF is to blame, even if they accept that it is a real phenomenon. However, in 2015 the economist Esteban Maito published some empirical work on the TRPF which seems to back up Marx’s hypothesis. Maito provides data on the rate of profit across 14 countries over the last 150 years. Not only does he demonstrate that the rate of profit has decreased in those countries over that time, but his data also show some interesting trends. We can see the rate of profit decrease more sharply in the periods immediately prior to the crashes of 1929 and 2008, as well as before and during the ‘stagflation’ era of the 1970s.

    Of course, eyeballing some graphs doesn’t prove anything, and more rigorous tests would need to be done to validate the hypothesis that the TRPF is the primary cause of capitalism’s booms and busts. But it suggests that Marx may have been on the right lines, which is all the more remarkable when you consider that he made this hypothesis over 150 years ago, before any of the crises mentioned above had actually occurred.

  • I often go and sell copies of the Socialist in my local town on a Saturday morning. Most members of the public I meet doing this are very friendly, even if they do not necessarily agree with my politics. So I was somewhat taken aback a couple of weeks ago when my paper-selling comrade and I were aggressively accosted by some people who turned out to be Labour councillors. They began by churning out all the usual stuff about us socialists letting in Reform. Then they accused us of only selling socialist papers in order to make ourselves feel good (?!). Next, they tried to gaslight me and my comrade by telling us that Labour weren’t making cuts to public services. Finally, they accused us of not having a coherent plan for how improved public services could be paid for. Here I shall respond to these points in turn.

    The councillors’ claim that socialists like me and my comrade are somehow letting in Reform is obviously nonsense. In fact this is the clearest case of projection you will ever come across. It is Labour who are opening the door to Reform by offering zero solutions to the problems the people of this country are facing and by normalizing right-wing talking points, particularly around the vilification of immigrants. I can also assure you that the reason I give up time to sell the Socialist on a Saturday morning is not to make myself feel good. Rather, it is to spread socialist ideas to the general public, and raise a bit of money for the Socialist Party at the same time. Somebody has to spread these ideas, as we clearly cannot rely on the mainstream media to do so.

    The claim that Labour isn’t making cuts to public services is also nonsense. The current Labour government has made many spending decisions that have effectively resulted in real-terms cuts to public services, particularly in ‘unprotected’ areas like local government, the criminal justice system, and the civil service. The claim that the UK government cannot afford to pay for improved public services is similarly nonsensical. However, I do think we socialists need to be careful in how we respond to this assertion. The standard response that we can raise the money by increasing taxes on the rich immediately runs into the counterargument that the rich will then just leave the country, taking their money with them. So how should we respond instead?

    I once asked a wealthy acquaintance of mine how he was able to pay for a house he was purchasing in central London. ‘With money’ came his answer. (Ask a stupid question…) I think that this is how we socialists should respond when faced with the ‘how are you going to pay for it?!’ question that inevitably arises whenever we suggest that we should perhaps try to improve public services a little bit. As I explained in a previous blog post, a sovereign government that issues its own currency has no need to tax before it spends and can effectively create as much money as it likes. In the same way that my wealthy acquaintance had access to money on demand to buy property, a sovereign government has access to money on demand to improve public services (if it wants to).

    The usual retort is that increased government spending without an equivalent increase in taxation inevitably leads to inflation. Indeed, this is precisely what one of the Labour councillors said to me when I put it to him that the UK government does not need to rely on taxing rich people to fund public services. The first thing to note about this is the shifting of goalposts: in claiming that an increased budget deficit inevitably leads to inflation, my interlocutor was unwittingly conceding that the UK government can, in fact, increase spending without increasing taxation. The second thing to note is that the claim is empirically false. Japan, for example, has run persistent budget deficits and now carries public debt of over 200% of GDP – more than twice that of the UK – yet it has had a lower inflation rate than the UK in every year of the past decade.

    I am not denying that budget deficits can lead to inflation – only that they necessarily do. In the words of the American economist Milton Friedman: ‘Government deficits can and sometimes do contribute to inflation. However, the relation between deficits and inflation is far looser than is widely believed.’ Friedman is revered by many on the right and is considered one of the main architects of neoliberalism. Yet even he concedes that budget deficits don’t necessarily cause inflation. So why is this idea so entrenched? I think it stems from a fundamental misunderstanding of what gives money value. Most people assume that the value of money is driven by the amount of it that exists, whereas the value of money is actually determined by the amount that people have to work in order to obtain it.

    The Labour councillors I encountered on that Saturday morning were an example of what the late, great American anthropologist David Graeber referred to as the ‘extreme centre’. Centrists like to portray themselves as pragmatists, but the truth is that they are often the most dogmatic people you will ever meet. I hope that this blog post will help my fellow socialists to cut through some of their nonsensical arguments.

  • The 2025 United Nations Climate Change Conference, more commonly known as COP30, will be held in Brazil from 10 to 21 November. In this context, COP stands for Conference of the Parties, and refers to the annual meeting of the nearly 200 countries that have signed the United Nations Framework Convention on Climate Change. At these conferences, representatives from around the world meet to review progress, negotiate new measures, and make decisions on how to tackle climate change. At least that’s what is supposed to happen. As the COP30 name suggests, this will be the 30th of these meetings, yet we are still nowhere near solving the problem of global warming. It seems fair to say therefore that the COP initiative has not been a success.

    One thing COP meetings have been successful in doing is setting targets, such as the agreement to keep warming below 2 degrees (Celsius) and ideally below 1.5 degrees, which was negotiated at COP21 in Paris. Another agreement was made at COP28 to triple renewable energy capacity and double the rate of energy efficiency improvements by 2030. But anyone can set targets; sticking to them is what matters. And in this respect, the COPs have been a failure. Global emissions continue to rise, atmospheric carbon concentrations are increasing, and the world is on a path to significantly exceed the 1.5 degree warming limit. Most projections put the world on a path to warming of 2.3-2.5 degrees by the end of the century, with the 1.5 degree threshold likely to be permanently breached by the 2030s.

    There are several reasons why the COPs have had such limited success in addressing the climate crisis. First, there is a lack of political will, with politicians invariably more preoccupied with short-term concerns. Second, there is a shortfall in climate finance, with wealthier nations unwilling to provide the promised – and adequate – support to developing countries for climate change adaptation. Third, the negotiating process has been heavily influenced by the fossil fuel lobby, with large numbers of lobbyists attending the conferences. Fourth, the agreements made, although legally binding in principle, are not legally enforceable in practice. And fifth, increased geopolitical tensions are now diverting attention and resources away from collaborative climate action.

    All of these reasons stem from the fact that under capitalism, nations and corporations are forced to compete rather than collaborate. To understand why this leads to the issues outlined above, we need to invoke a classic problem in game theory known as the prisoner’s dilemma. This problem involves two ‘rational’ agents, each of whom can either cooperate for mutual benefit or betray their partner (‘defect’) for individual gain. The dilemma arises from the fact that while defecting is the ‘rational’ choice for each agent individually, mutual cooperation yields a higher payoff for both than mutual defection. Climate change can be modeled as a prisoner’s dilemma because competition leads nations to avoid the actions required for global cooperation (i.e. to ‘defect’), even though cooperation would yield the best long-term outcome for everyone.
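    The dilemma is easy to make concrete. The payoff numbers below are the standard textbook ones, not anything specific to climate negotiations:

```python
# A standard prisoner's dilemma payoff matrix (illustrative numbers).
# Payoffs are (row player, column player); 'C' = cooperate, 'D' = defect.
payoffs = {
    ('C', 'C'): (3, 3),  # mutual cooperation
    ('C', 'D'): (0, 5),  # I cooperate, you defect
    ('D', 'C'): (5, 0),  # I defect, you cooperate
    ('D', 'D'): (1, 1),  # mutual defection
}

# Self-interested reasoning: whatever the other player does,
# defecting pays more for me individually.
def best_reply(their_move):
    return max(('C', 'D'), key=lambda mine: payoffs[(mine, their_move)][0])

print(best_reply('C'), best_reply('D'))  # D D — both defect...
print(payoffs[('D', 'D')], payoffs[('C', 'C')])  # ...yet (1, 1) < (3, 3)
```

    Defection is the best reply to either move, so two self-interested players end up with 1 each when cooperation would have given them 3 each.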

    To solve the problem of climate change, therefore, we need to find a way out of the prisoner’s dilemma. Luckily, someone has already done this. The American Marxist economist John Roemer has demonstrated that the dilemma arises from the definition of ‘rational’, which in game theory – and in economics more generally – is usually taken to mean ‘acting purely out of self-interest’. But there is no reason we need to define ‘rational’ in this way. Roemer has come up with an alternative definition based on Kant’s categorical imperative: ‘act only according to that maxim whereby you can at the same time will that it should become a universal law’. He then goes on to demonstrate that under this definition of rationality, the prisoners will cooperate and achieve the optimal outcome.
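    Roemer’s idea can be sketched against the same textbook payoff matrix. This is a self-contained toy, not his full model: the ‘Kantian’ player asks ‘which move is best if everyone makes the same move?’, i.e. optimizes over the symmetric outcomes only.

```python
# Textbook prisoner's dilemma payoffs again (illustrative numbers).
payoffs = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# Kantian reasoning in miniature: compare only the symmetric profiles
# (everyone cooperates vs everyone defects) and pick the best one.
def kantian_choice():
    return max(('C', 'D'), key=lambda m: payoffs[(m, m)][0])

print(kantian_choice())  # C — universal cooperation beats universal defection
```

    Since (C, C) pays 3 and (D, D) pays only 1, the Kantian player cooperates, and if both reason this way they reach the outcome that self-interested players miss.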

    Now that’s all well and good, a critic might say, but that is not how nations and corporations act in the real world. And of course that’s true, as the lack of action on climate change clearly demonstrates. The reason for this was already stated above: under capitalism, nations and corporations are forced to act out of self-interest. In other words, they are forced to act ‘rationally’ in the usual sense of the term, rather than in the Kantian sense defined by Roemer. But there is nothing special or natural about the self-interested form of rationality; it is simply how nations and corporations are forced to behave under capitalism. To stop them from doing this, we need to get rid of capitalism and replace it with a system that encourages Kantian-style cooperation. The future of our planet depends on it.

  • We Marxists talk about class a lot. Specifically, we like to talk at length about how under capitalism there exist two classes – the working class and the capitalist class – whose interests are diametrically opposed. But is this really true? In order to answer that question, we first need to define what we mean by ‘working class’ and ‘capitalist class’. Actually we only need to define one of these, as it is usually assumed that whoever does not belong to one class automatically belongs to the other. The textbook Marxist definition of a capitalist is someone who owns capital – or more accurately, who owns the means of production. Therefore, a worker is someone who doesn’t own the means of production. Straightforward, right?

    Well, not really. All we have really done is define one thing – a capitalist – in terms of another thing – the means of production. But what do we mean by that? Broadly, the means of production refers to the physical facilities and resources used by a society for producing goods and services. So a capitalist is someone who owns a factory, machinery, land, or raw materials that get used in production; conversely, a worker is someone who doesn’t own any of these things. This seems like a more watertight definition but there are still a few leaks. Are landlords capitalists under this definition, for example? The answer is yes, although it might not be immediately obvious why. The reason is that under capitalism, property is considered part of the means of production.

    In that case, shouldn’t all homeowners be defined as capitalists? Again, the answer – according to the definition above – is yes, but most homeowners probably do not think of themselves in this way. A complication is that most people who buy a house are only able to do so by taking out a mortgage, thereby effectively committing themselves to working for the next 30 or so years on pain of having their home repossessed by the bank. But aside from the small number of unfortunate homeowners who find themselves in negative equity, the majority of people who own a home have a net positive position in productive assets, and this is sufficient – again, according to the definition above – to make them capitalists.

    This doesn’t seem quite right to me though. If this is our definition of a capitalist then we can’t really say that the interests of capitalists and workers are diametrically opposed. The argument set out in the previous paragraph points to a better definition. Perhaps it is not your relation to capital that makes you a capitalist, but your relation to labour. It makes more sense to me to define someone as a capitalist if their material conditions would not deteriorate if they decided to stop working; or conversely, to define someone as a worker if their material conditions would deteriorate if they stopped working. Under this definition, most homeowners are not capitalists, as they need to work to pay off their mortgage or risk losing their home.
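    The proposed definition is simple enough to write down directly. The function and numbers below are hypothetical, chosen only to show that the test hinges on one’s relation to labour rather than on asset ownership per se:

```python
# Toy encoding of the definition above: you are a capitalist if your material
# conditions would not deteriorate were you to stop selling your labour power,
# i.e. if your non-labour income covers your living costs.
def is_capitalist(non_labour_income: float, living_costs: float) -> bool:
    return non_labour_income >= living_costs

# A homeowner with a mortgage and no income besides wages must keep working:
print(is_capitalist(non_labour_income=0, living_costs=30_000))        # False
# A large portfolio landlord living entirely off rents need not:
print(is_capitalist(non_labour_income=250_000, living_costs=60_000))  # True
```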

    Most landlords would not qualify as capitalists under this definition either, as the majority of landlords need to supplement their rental income by working. The ones that don’t are generally those who own a large portfolio of properties, and these people clearly belong to the capitalist class. Such landlords would probably protest that they have to work hard to maintain their large portfolio of properties (my heart bleeds for them). But here I am referring to ‘working’ in the technical Marxist sense of ‘selling your labour power for a wage’. That is not what landlords with large property portfolios are doing when they work, regardless of how hard they may be working. The same goes for CEOs of corporations, even if they pay themselves a salary to create the illusion that they are merely employees.

    Under this revised definition it seems fairly clear that the interests of workers and capitalists are diametrically opposed. If you need to sell your labour power for a wage, it is in your interest that your wages go up; conversely, if you are the CEO of a corporation and derive your income from profits, then it is in your interest that wages go down, as from the point of view of a corporation wages are a cost that eats into profits. The situation with the portfolio-owning landlord is not quite so clear-cut, as they are in theory indifferent as to whether wages go up or down. But it is obviously in their interest to increase rents, and this is against the interest of workers – at least those who are forced to rent property, as most workers are at some stage in their lives.

    One feature of capitalism which sets it apart from older modes of production such as feudalism is that the class a person belongs to is not fixed at birth. In theory it should be possible for a worker to become a capitalist by accumulating a sufficient amount of capital. In practice, it is extremely difficult to do this, and the few who manage it generally only do so through extraordinary luck (by winning the lottery, for example). Regardless, the perceived fluidity of class under capitalism plays into the hands of the capitalists, as it makes it much more difficult to develop class consciousness among workers. Most workers probably do not even consider themselves to be working class, if they think about class at all.

    The development of class consciousness among workers is a necessary step for overthrowing capitalism. Presenting a clear definition of what makes somebody a capitalist – and by extension, a worker – is a prerequisite for this. I hope to have made a contribution towards elucidating such a definition here.