Anyone else feel like the reason humanity doesn’t fix important issues for long stretches is that people simply don’t care enough to group up and fix them? When I try to educate folks on complex problems, they often seem unwilling to discuss them and quickly defend the status quo, saying “that’s just how things are.”

But we can’t keep ignoring these issues, because doing so could delay necessary progress for thousands of years.

  • jimmycrackcrack@lemmy.world

    The problem with answering that is that there’s no set standard for the appropriate amount of apathy, so however much there is, that’s how much there should be, neither too little nor too much; that’s just how apathetic humans are, and there’s nothing to compare against for judging appropriate levels. Why are we as apathetic as we are? In my opinion it’s pretty similar to why climate change is so difficult to address, which makes sense, since apathy is one of the biggest stumbling blocks to addressing it. In general, it’s more difficult to energise, co-ordinate and sustain collective human effort on a large scale for issues that don’t seem immediate, tangible, easily attributable or physically visible, and where the solution and the action to be taken aren’t simple to understand, or the improvement isn’t simple to observe and reasonably short term (or at least doesn’t promise to be). Long-term, society-wide projects usually require more than an appeal to our better nature. People caring, wanting to help each other, wanting fairness or kindness or just treatment does work as a motivator, but I think it tends to work mostly on the smaller scale, for small in-groups, preferably people we’ve actually met, and with immediate social pressures to reinforce those pro-social desires.

    Human beings are capable of complex, difficult, awe-inspiring projects for “good” or “bad”, but those tend to involve more diffuse motivations and more immediate rewards and incentives, with the motivations at their most removed from the original instigators. A few of the people involved might be motivated by altruism or something esoteric like an interest in science or a religious belief, but if their goals involve the masses, that usually means filtering their motivation down through stakeholders, to careerists, to money-makers, and on down to people working for subsistence, and in many cases to people who are enslaved and work because they don’t want to be killed or harmed.

    To top all that off, there’s the more obvious problem of keeping more and more people in bigger and bigger projects on the same page about what to do, how to do it, or whether we even want to do it. If the important issue you’re thinking of is, for example, inequality, it’s going to be very hard to get agreement on what that actually is, whether it’s even a good or bad thing, how we should deal with it, or whether we should deal with it at all, and many of the people in that debate will be passionate in their positions. Complex “important” issues also tend to have beneficiaries who, somewhat understandably, don’t want to work against their own interests, and who shape their environment to the best of their ability so that the easier thing to do is tolerate the issue, making the near-impossible mountain of getting human beings together for the greater good harder still, by design.

    This theory may have some flaws, depending on how you frame the important issue. If the issue were crime, for example, you could argue that for the most part it’s fairly easy to get most people not to murder strangers on a whim or for petty gain; even racists probably walk through an average day surrounded by people of many ethnicities and cultures and don’t generally (with notable exceptions) need to be convinced or induced not to physically harm everyone they pass, and this tends to hold true on a larger scale, not just in the in-groups I described. Still, as a rule of thumb, I think this is basically how we operate: how much we care, how much courage we can muster, how much personal risk or resources or energy we can spare for manifesting ideals is usually proportional to how directly an issue impacts us personally, how close we are to the people affected, how easy the issue is to identify and rectify, how long fixing it will take, and how often we’ll have to act. I suspect you could plot apathy against any one of those measures and see a direct, inverse relationship. I suspect this comes from our nature and biological origins, but that’s not an assertion I can back up rigorously.

    Finally, depending on the issue, sometimes it really is rationally better to tolerate a problem when all the available solutions are bad and could make things worse. It tends to be difficult to reach consensus on whether we’re in such a situation.