
    social media

    Pakistan bans TikTok again for hosting 'obscene' content

    TikTok users in Pakistan won't be able to access the app yet again after the Peshawar High Court issued an order to ban the short-form video sharing platform in the country. According to Al Jazeera, Ary News TV and other local news outlets, the court made the ruling during a hearing into a petition against the app. TikTok had around 33 million users in Pakistan (out of a total of 100 million users) as of last month, App Annie told TechCrunch.

    After receiving the order from the court, the Pakistan Telecommunication Authority (PTA) published a statement on Twitter confirming that it has issued directions to service providers "to immediately block access to the TikTok App" in compliance.

    "In respectful compliance to the orders of the Peshawar High Court, PTA has issued directions to the service providers to immediately block access to the TikTok App. During the hearing of a case today, the PHC has ordered for the blocking of App." — PTA (@PTAofficialpk) March 11, 2021

    Chief Justice Qaiser Rashid Khan said TikTok videos "are peddling vulgarity in society," Ary News TV wrote, and that the platform hosts unethical and immoral content. He also decided that the app should remain blocked until TikTok cooperates with authorities, after the PTA told the court that it had approached the company to have "objectionable and indecent" content removed to no avail.

    In a statement sent to Al Jazeera, a spokesperson defended the platform and its moderation practices: "TikTok is built upon the foundation of creative expression, with strong safeguards in place to keep inappropriate content off the platform. In Pakistan we have grown our local-language moderation team, and have mechanisms to report and remove content in violation of our community guidelines. We look forward to continuing to serve the millions of TikTok users and creators in Pakistan who have found a home for creativity and fun."

    This isn't the first time the app has been banned in the country, which recently rolled out digital laws that give regulators the power to censor content. As the Financial Times notes, the new laws require companies to remove offensive content, including content that threatens the "integrity, security and defense of Pakistan." TikTok's first ban came before the new laws took effect, though, after authorities decided that the app hosted "immoral and indecent" videos. That said, the PTA lifted that ban a few days later after TikTok promised to moderate clips according to Pakistani "societal norms" and laws.

    Facebook files motions to dismiss FTC antitrust charges

    Facebook says the antitrust lawsuits targeting the company’s acquisitions of Instagram and WhatsApp should be dismissed. The company issued its first official response to antitrust charges from the Federal Trade Commission and 46 state attorneys general, saying that the government was seeking a “do-over.” Facebook filed motions to dismiss both cases. In a statement, the company said neither lawsuit had made a credible case for antitrust. “Antitrust laws are intended to promote competition and protect consumers,” Facebook wrote. “These complaints do not credibly claim that our conduct harmed either.”

    The response comes three months after the company was hit with antitrust charges from the FTC and the state attorneys general. Both cases allege that Facebook has engaged in anti-competitive behavior and that its deals to acquire Instagram and WhatsApp were meant to neutralize companies it saw as a threat. Facebook said this amounted to a do-over as both acquisitions were scrutinized, and approved, by the FTC years ago.

    In a new court filing, Facebook’s lawyers say that the FTC “has not alleged facts amounting to a plausible antitrust case,” and that the charges come amid a “fraught environment of relentless criticism of Facebook for matters entirely unrelated to antitrust concerns.” Regarding the case from state AGs, Facebook says that the states “lack standing to bring the case” and that they “waited far too long to act.” In its motion to dismiss the state charges, Facebook referred to the states’ case as “afterthought claims.”

    In addition to its acquisitions, both cases also pointed to Facebook’s platform policies, and how it treated third-party developers. The state case and the FTC lawsuit both called out Facebook’s treatment of Twitter-owned Vine, which saw its access to Facebook’s API cut off in 2013 in a decision that was approved by Mark Zuckerberg. In its motion to dismiss the FTC case, Facebook lawyers said the company “had no duty to make its platform available to any other app.”

    The FTC and the state AGs have until April to respond to Facebook’s motions to dismiss. As The Wall Street Journal points out, actually getting the charges dismissed before a trial requires Facebook to “meet a high legal standard” that may be difficult to clear. Even if it did, a dismissal would hardly be the end of Facebook’s antitrust woes. The company is also facing an antitrust investigation from Congress and regulators in the European Union.

    Twitter tests full-size image previews in your timeline on Android and iOS

    The next time you’re browsing through your Twitter timeline on your phone, you may notice a small but impactful change to how the service handles images. With a small subset of iOS and Android users, Twitter has started testing full-sized picture previews, allowing users to see timeline images in their original aspect ratio. Before starting today’s test, Twitter cropped all non-16:9 images to maintain uniformity on your timeline. Provided a tweet only includes one image and it’s in a relatively standard aspect ratio, the change will make it so that you don’t have to tap on an image to see it in its entirety. In theory, that should make the experience of browsing through your timeline more streamlined. Additionally, the company announced that it’s also testing a feature that allows people to upload 4K-quality images from their iPhone or Android device. In the “Data usage” section of the settings menu, you’ll see a toggle to enable high-quality image uploads. It might seem like a small thing for Twitter to change how it displays images, but it’s a significant one all the same. Over the years, there have been a lot of complaints about how the service handles images. Those came to a head last year when people found that Twitter’s image-cropping algorithm was focusing on white faces over black ones. “With this test, we hope to learn if this new approach is better and what changes we need to make to provide a ‘what you see is what you get’ experience for tweets with images,” said Twitter Chief Design Officer Dantley Davis of the test. Twitter hasn’t said when the cropping change could make its way to all users. As with any test, the company could decide to keep things the way they are.
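
    To make the cropping rule described above concrete, the sketch below is a hypothetical Python version of the kind of client-side check the article implies: a tweet with a single image in a "relatively standard" aspect ratio gets an uncropped preview, while everything else keeps the old uniform crop. The function name and the ratio thresholds are assumptions for illustration, not Twitter's actual logic.

        # Hypothetical sketch of an aspect-ratio rule for timeline previews.
        # The 1:2-to-2:1 "standard" range is an assumption, not Twitter's documented behavior.
        def can_show_uncropped(image_count: int, width: int, height: int,
                               min_ratio: float = 0.5, max_ratio: float = 2.0) -> bool:
            """Return True if a preview could be shown at its native aspect ratio."""
            if image_count != 1:
                return False  # multi-image tweets keep the uniform cropped grid
            ratio = width / height
            return min_ratio <= ratio <= max_ratio

        # A single 4:3 photo qualifies for a full preview; a very tall screenshot does not.
        print(can_show_uncropped(1, 4000, 3000))  # True
        print(can_show_uncropped(1, 1080, 4000))  # False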

    Facebook reportedly investigated over 'systemic' racism in hiring

    Facebook has publicly committed to fighting racism, but there are concerns that isn't translating to its recruitment practices. Reuters sources say the US Equal Employment Opportunity Commission (EEOC) is investigating possible "systemic" racism in Facebook's hiring and job promotions. Facebook program manager Oscar Veneszee Jr. and four candidates have reportedly accused the social network of discriminating against Black applicants and staff through subjective evaluations and pushing racial stereotypes. Three of the people brought the case in July 2020, with a fourth joining in December. The EEOC tapped investigators for systemic cases by August 2020, but they've only received briefings from both sides of the case over the past four months. While the full extent of the alleged violations isn't clear, one of the policies in dispute stems from hiring bonuses. The company hands out up to $5,000 in bonuses if a referred candidate is hired, but those referrals tended to reflect the existing employee demographics and disadvantage Black applicants (who make up 3.9 percent of US employees as of last June). There are no guarantees the EEOC investigation will lead to formal action. The Commission declined to comment, but Facebook said it took discrimination accusations "seriously" and investigated "every case." This isn't the first time Facebook's hiring has come under fire. In 2017, a Bloomberg report pointed out that a handful of executives typically made final hiring decisions and tended to use metrics that favored culturally similar candidates, such as people endorsed by existing staff or those who went to certain schools. Facebook maintained that it had diverse hiring teams that brought in candidates from a wide range of backgrounds, but its incentive system was having problems at the time. If the allegations hold up, they'll suggest that some of those years-old complaints still persist. An EEOC determination could lead to reforms, even if it's just through public pressure.

    YouTube removes five TV channels run by Myanmar's military

    YouTube has taken down channels connected to five TV stations run by Myanmar's military (Tatmadaw) in the wake of further violence in the country. The service told the New York Times it removed the channels for breaking its community guidelines, but it didn't explain the decision any further. Among the removed channels were ones for Myanmar Radio and Television and Myawaddy Media, which air news, sports and Tatmadaw propaganda. Dozens of peaceful protesters were killed this week during demonstrations against last month's coup. Protesters have organized rallies online and shared footage of violence carried out by the military and police. The Tatmadaw responded by blocking social media services and occasionally shutting down internet access entirely. YouTube removed dozens of other channels connected to the military following Myanmar's elections last year. It's not the only social media giant to take action to stem the Tatmadaw's attempts to spread misinformation. Facebook and Instagram banned the military from those platforms last week, along with ads by Tatmadaw-owned businesses. Facebook blocked Myawaddy on February 2nd, the day after the coup.

    TikTok forms an EU Safety Advisory Council following scrutiny from regulators

    TikTok has established a Safety Advisory Council for Europe that will shape its content moderation policies amid a buildup of complaints and probes against the app on the continent. The creation of the nine-member panel, which includes anti-bullying and mental health experts, follows the US Content Advisory Council the Chinese-owned platform formed last March. TikTok's European delegates include academics and members of non-profits that specialize in dealing with child abuse, addiction, violent extremism, discrimination and cyber-bullying. It says the council will help it to develop "forward-looking policies" that address current and emerging challenges facing its thriving community. While the above issues aren't strictly limited to the video-sharing platform and affect all social networks alike, TikTok is facing heightened scrutiny in Europe over security and privacy concerns that relate to the welfare of its younger user base. EU watchdogs had already launched a glut of probes investigating its data-sharing practices before the app was hit with a massive consumer complaint last month. In it, a group of consumer watchdogs alleged it was breaking GDPR rules by hoarding personal information, hiding its policies behind vague terms and conditions and blurring the line between advertising and user-generated content. At the time, TikTok said it was ready to meet with the consumer organizations (a collective known as the BEUC) to discuss their concerns. Whatever its failings, its latest move is also aimed at engaging with regulators as it seeks to further promote its user safety policies.

    Facebook bowed to demands from Turkey to block one of its military opponents

    When Turkey launched its Afrin offensive in early 2018 to dislodge Kurdish minorities from Northern Syria, the country ordered Facebook to block the page of a prominent militia group in the area known as the People’s Protection Units, or YPG. Forced to make a decision, the company prioritized staying online over objecting to censorship, new internal emails obtained by ProPublica show. Since then, the social media giant has blocked users in Turkey from accessing the YPG’s Facebook page. Facebook complied with the order even though, like the US government, it does not consider the group a terrorist organization.

    “... We are in favor of geo-blocking YPG content if the prospects of a full-service blockage are great,” the team that assessed the situation wrote to Joel Kaplan, the company’s vice president of global public policy. “Geo-blocking the YPG is not without risk — activists outside of Turkey will likely notice our actions, and our decision may draw unwanted attention to our overall geo-blocking policy.” The subsequent discussion was short. When Kaplan told Facebook COO Sheryl Sandberg and CEO Mark Zuckerberg he agreed with the recommendation, Sandberg sent a single-sentence response. “I am fine with this,” she said.

    When asked about the emails, Facebook confirmed it blocked the page after it received a legal order from the Turkish government. “We strive to preserve voice for the greatest number of people. There are, however, times when we restrict content based on local law even if it does not violate our community standards,” Facebook spokesperson Andy Stone told ProPublica. “In this case, we made the decision based on our policies concerning government requests to restrict content and our international human rights commitments.” Publicly, Facebook has also said free speech is one of its core tenets. “We believe freedom of expression is a fundamental human right, and we work hard to protect and defend these values around the world,” it said in a recent blog post on Turkey.

    In many ways, the story of Facebook's YPG ban is one of poor transparency. In the same statement, the company noted it discloses content restrictions in its biannual transparency reports. However, YPG isn’t explicitly mentioned in the section of its website dedicated to Turkey. And if you try to visit the group’s page through a Turkish server using a VPN, the only error message you will see is one that says, “the link may be broken, or the page may have been removed.”

    Instagram will permanently ban accounts that send abusive direct messages

    Instagram says it’s cracking down on harassment in direct messages and will permanently ban accounts that use direct messages to harass or threaten other users. The change comes after several football players in the UK reported racist attacks on the app. In a blog post, Instagram acknowledged the “racist online abuse targeted at footballers” and added that direct messages are more difficult for the company to police because it doesn’t use the same automated technology it uses to detect bullying in comments. But with the new policy, Instagram says it will take “tougher action” when harassment in direct messages is reported. Previously, Instagram would temporarily limit an account’s ability to use DMs when harassment was reported. Now, the company says that repeated “violating messages” will result in a permanent suspension from its service. “We’ll also disable new accounts created to get around our messaging restrictions, and will continue to disable accounts we find that are created purely to send abusive messages,” Instagram writes. The company also says that it’s working on making the DM controls that allow users to disable messages from accounts they don’t follow available to everyone.

    Senate Democrats introduce a bill to limit Section 230

    A trio of Democratic senators have introduced a bill that would make online platforms more liable for the content that their users post, particularly if those posts lead to harm. The SAFE TECH Act aims to limit the protections that social media companies are afforded under Section 230, a provision of the Communications Decency Act of 1996 that shields them from accountability for user activity. Sens. Mark Warner, Mazie Hirono and Amy Klobuchar said in a joint statement that the bill would make the likes of Facebook, Twitter and YouTube more liable for "enabling cyberstalking, targeted harassment and discrimination." According to Protocol, staffers for the three senators consulted with civil rights groups and experts in online harm as they drafted the bill.

    If the act becomes law, platforms wouldn't be able to claim Section 230 protections for ads or other paid content — many of them have been limiting political ads to varying degrees. The provision would not shield companies from complying with court orders, or from alleged violations of civil rights, antitrust, cyberstalking or human rights laws at the state and federal level. Additionally, the bill makes it clear that Section 230 would not protect platforms from civil actions stemming from wrongful deaths. The bill also aims to limit Section 230 at a broader level to ensure the provision applies only to speech and not to all online activity, such as the dealing of illicit goods. It would modify the language of Section 230 (currently "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider") by replacing "information" with "speech."

    "For years, Section 230 provided a ‘Get Out of Jail Free’ card to platform companies as their sites are openly and repeatedly used by bad actors to cause damage and injury. Section 230 will be brought into the present-day with the SAFE TECH Act creating targeted exceptions. (2/8)" — Mark Warner (@MarkWarner) February 5, 2021

    Although the Democrats hold power in both houses of Congress, it remains to be seen whether the senators can muster enough support to push through their bill. While there's a general consensus among politicians (and even the likes of Facebook and Twitter) that Section 230 should be changed, there are differing opinions on how best to do so. Other senators have recently introduced proposals to reform the provision, which underscores the fact that many politicians are training their sights on it.

