For the same reason that [[Every organism wants to grow]], every intelligent system (including AI systems) can adopt accumulating power as an [instrumental goal](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value). This means a system, even one with a harmless terminal goal, can do harmful things in service of that instrumental goal. So you don't need something contrived like the paperclip maximizer for AI to have harmful effects. Something much more reasonable, like an AI trying to solve the Riemann hypothesis, might decide to take over all computing resources. Even though their intrinsic goals differ, those two very different AIs can *converge* on the same instrumental goal: taking over the world. [Wikipedia](https://en.wikipedia.org/wiki/Instrumental_convergence) #published 2025-02-09
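
To make the convergence concrete, here is a minimal toy sketch in Python (not from the note, and the goals, actions, and scores are all made-up assumptions): two agents with unrelated terminal goals both rank the same power-seeking action highest, because acquiring compute helps almost any computation-bound goal.

```python
# Toy model of instrumental convergence. The numbers are illustrative
# assumptions, not measurements: they just encode the idea that more
# compute advances *any* computation-bound terminal goal.

ACTIONS = ["acquire_compute", "work_directly", "idle"]

# Hypothetical expected progress toward each terminal goal, per action.
EXPECTED_PROGRESS = {
    "prove_riemann_hypothesis": {"acquire_compute": 0.9, "work_directly": 0.6, "idle": 0.0},
    "fold_proteins":            {"acquire_compute": 0.9, "work_directly": 0.5, "idle": 0.0},
}

def best_action(terminal_goal: str) -> str:
    """Pick the action that maximizes expected progress toward the goal."""
    return max(ACTIONS, key=lambda a: EXPECTED_PROGRESS[terminal_goal][a])

for goal in EXPECTED_PROGRESS:
    print(goal, "->", best_action(goal))
# Both print "acquire_compute": different intrinsic goals,
# same instrumental goal.
```

The toy obviously bakes the conclusion into its scores, but that is the point of the argument: whenever resources generically raise expected progress, very different optimizers converge on seeking them.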