Twitter Inc. is turning to greater automation in its battle against abuse on its platform, saying its software will start automatically demoting replies that it determines are likely to disrupt or disturb users’ conversations.

The change, which will roll out over the coming week, isn’t designed to deal with accounts or messages that violate Twitter’s TWTR content policies, which the company says it already takes action against. Instead, the new approach targets accounts that Twitter says exhibit signs of “troll-like behavior” and that “distort and detract from the public conversation on Twitter.” Rather than deleting those accounts’ messages, Twitter will push them lower in the list of replies people see beneath a tweet and in search results, particularly for popular hashtags.

Executives at Twitter said the move is among the most important it has made to address longstanding criticism about bad behavior on its service. “It is shaping up to be one of the biggest impact changes we have made,” said Twitter Chief Executive Jack Dorsey.

Twitter and other social networks have sometimes struggled to balance demands to filter out abusive content with criticism that doing so risks imposing the values of their employees on their users. Some right-wing activists, for example, have complained that past efforts by Twitter to curb harassment unfairly harmed conservative commentators.

An expanded version of this story can be found at WSJ.com.

