Emerging technologies and computational capabilities have expanded communication channels and created platforms that allow almost anyone to publish information of every kind. Much of this content is, knowingly or unknowingly, mistaken or deliberately false. The channels themselves vary greatly in the access they provide and in their ease of use.
In topics related to public policy, the problems are frequently structural, arising from an imbalance of information exchange and trust among government, subject matter experts, and the public. Most public policy agendas emerge in academic meetings and journals, where government leaders and university professors work closely together. They share a common language, rely on standard frameworks, and communicate through channels designed by and for their expertise. By contrast, the content describing proposed policy changes is hard for most people to interpret and is communicated via channels that are hard to find and difficult to use. In the age of social media, the public turns instead to channels that are always available and easy to use, and finds comfort in content from sources like QAnon that is easy to understand even when the information is false.
Our intent is to explore dimensions of content such as transparency, trust, and responsible debate, and to examine how to maximize the public's ability to access and use these channels. Our premise is that new platforms are needed to bridge the data-meaning gap and to give the public a reliable way to obtain, create, and use information essential to their well-being.