Listening to Stephen Wolfram talk about computational reducibility and the role of the observer in determining what is reducible. We blur out certain details so we can see patterns, and we use those patterns to predict the future or compute an answer. Of course we've always known this at some level, but at this moment it is especially important to ask which observers are creating our AI models. What, exactly, are they blurring out? Maybe I care about the very things they are eliding for the sake of prediction and automation.