Naively, communication between humans exists to convey information. In fact, humans seem to communicate for many purposes. For instance, entertainment: it recently occurred to me that (anecdotally) a long time passed between the adoption of iPhones among my social circle and the habit of checking them to settle factual disputes. One possible explanation is that people enjoy disputing facts as a sort of competition or flex. I think that the construction and use of language interferes in many ways with its usefulness for determining truth.
One significant, malign feature of language as used by humans (at least in my cultural context) is its weaponization for debate. It has been argued that in the tribal context where it evolved, language is mainly useful for convincing others to yield social power. This idea is somewhat connected to the Machiavellian intelligence hypothesis; perhaps language was developed in a coordination/domination arms race. In any case, the idea of constructing strong arguments or cultivating rhetoric as a form of intellectual development is still quite common today. I experienced this very clearly during my brief time as a debater in high school; there is a presumption that being able to construct persuasive arguments for your position is closely linked to reasoning and communicating well. I think that in fact it can be corrosive to both abilities.
The most difficult part of matching nontechnical beliefs to reality may be cultivating the willingness to relinquish false beliefs. This is an imprecise claim based mostly on anecdotal experience, but I think it carries a lot of truth: certainly in discussions on the internet, people seem quite capable of finding a great deal of evidence supporting their current position, but tend to see changing their minds as a form of defeat (see my tangentially related post on confirmation bias). I personally experience some unpleasant emotions when I force myself to admit I was wrong, and despite a certain degree of practice and a philosophical inclination to do so as soon as the evidence disfavors me, the word "force" is still appropriate. The practice of being given a position, finding evidence to support it, and constructing an argument around that evidence for your predetermined conclusion is, in my mind, actively opposed to truth-seeking. Debaters often alternate between arguing pro and con, taking opposing positions in every other round. Unfortunately, I do not think that these two non-truth-seeking activities somehow cancel out to produce truth-seeking (though the idea of prosecution versus defense in the justice system sort of relies on the presumption that they can, an observation that I believe was originally made in the Sequences). In particular, this alternation doesn't seem to train any mechanism for recognizing when the truth lies somewhere between two extremes, which it often (but not always) does.
One area where communication particularly lacks useful tools is the expression of uncertainty. It should be clear from the above why this might be the case. However, conveying uncertainty is incredibly important to communicating accurately and honestly. There are many qualifiers that convey low confidence ("I think that," "probably," etc.). The problem with these qualifiers is that they don't do a great job of expressing the cause of uncertainty. It is possible to be uncertain about a statement for various importantly different reasons. Sometimes we arrive at our conclusions by consciously using heuristic arguments or methods (let's say, heuristic uncertainty - Kahneman and Tversky might label this with the "dual" term bias uncertainty, but I feel that is too general). Sometimes we only vaguely remember some information, perhaps private information, that implies our statement (retrieval uncertainty). These two cases are important to distinguish for several reasons. First, heuristic conclusions can sometimes be "verified" by others with a few moments of thought, and are perhaps more "consistently unreliable." That is, in some applications they are likely to be close to the correct answer, but perhaps unlikely to be exactly correct (one might say probably approximately correct, but also probably not exactly correct). On the other hand, an unreliably retrieved memory is often either an exactly correct data point or totally useless and misleading. Retrieval uncertainty can (and often should) be interrogated, since it implies some level of private information. If that private information turns out to be fairly high-certainty, it may be much more reliable than the heuristic arguments of other members of the conversation. Though I haven't verified this empirically, I expect that a system for efficiently communicating types of uncertainty would be very, very useful.
Lack of empirical verification may also be worth reifying, though it might be considered a subclass of heuristic uncertainty.
I have adopted a shoulder roll for heuristic uncertainty and an extended shrug for retrieval uncertainty, though I don't use these consistently yet. In writing, perhaps a leading ~ or ? respectively, though I see disadvantages to both, such as a possible but ~unlikely conflict with logical negation. Any suggestions for further extensions would be interesting! There have been somewhat related discussions of communication on LessWrong, though ?this is probably the first reification of these specific terms.
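As a toy illustration of the written convention, the markers are simple enough to parse mechanically. This is just a sketch of the proposal, not an implementation anyone uses; the function name and example sentences are my own invention:

```python
# Toy sketch of the proposed notation: a leading "~" marks heuristic
# uncertainty, a leading "?" marks retrieval uncertainty. Anything
# else is an unmarked (ordinary-confidence) statement.

def uncertainty_type(statement: str) -> str:
    """Classify a statement by its leading uncertainty marker."""
    if statement.startswith("~"):
        return "heuristic"
    if statement.startswith("?"):
        return "retrieval"
    return "unmarked"

print(uncertainty_type("~the drive should take about two hours"))  # heuristic
print(uncertainty_type("?I think the talk was in the main hall"))  # retrieval
print(uncertainty_type("water boils at 100 C at sea level"))       # unmarked
```

A reader (or tool) seeing "retrieval" knows to interrogate the speaker's half-remembered private information, while "heuristic" invites re-deriving the estimate; the ~negation collision noted above is the obvious cost of overloading ~.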