
The run-heavy Eagles have decided they don’t want to go through the rest of the season without a fullback. After losing linebacker/fullback Ben VanSumeren late last week to a knee injury that sent him to injured reserve, the team used tight end Grant Calcaterra in that role Sunday against the Baltimore Ravens. Calcaterra, however, will likely have to move into the role of the No. 1 tight end after Dallas Goedert suffered a knee injury against the Ravens that could keep him out for at least Sunday’s game against the Carolina Panthers.
DENVER — So you're the most valuable player of that annual Thanksgiving Day backyard flag football game. Or played tackle football on any level. Or ran track. Or dabbled in basketball. Or toyed with any sport, really. Well, this may be just for you: USA Football is holding talent identification camps all over the country to find the next flag football star.
It's "America's Got Talent" meets "American Idol," with the stage being the field and the grand prize a chance to compete for a spot on a national team. Because it's never too early to start planning for the 2028 Olympics in Los Angeles, where flag football will make its Summer Games debut.
Know this, though — it's not an easy team to make. The men's and women's national team rosters are at "Dream Team" status: the men's side has captured six of the last seven world championships, and the women have won three in a row. To remain on top, the sport's national governing body is scouring every football field, park, track, basketball court and gym to find hidden talent to cultivate.
USA Football has organized camps and tryouts from coast to coast for anyone ages 11 to 23. There are more than a dozen sites set up so far, ranging from Dallas (Sunday) to Chicago (Dec. 14) to Tampa (March 29) to Los Angeles (TBD) and the Boston area (April 27), where the camp will be held at Gillette Stadium, home of the New England Patriots. The organization has already partnered with the NFL on flag football initiatives and programs.
The numbers have been through the roof, with engagement on social media platforms increasing by 86% since flag football was announced as an Olympic invitational sport in October 2023. Participation by boys and girls ages 6 to 17 in flag football peaked at more than 1.6 million last year, according to USA Football research.
"We pride ourselves on elevating the gold standard across the sport," said Eric Mayes, the managing director of the high performance and national teams for USA Football. "We want to be the best in the world — and stay the best in the world."
Flag football was one of five new sports added to the LA28 program. The already soaring profile of American football only figures to be enhanced by an Olympic appearance. Imagine, say, a few familiar faces taking the field, too. Perhaps even NFL stars such as Tyreek Hill or Patrick Mahomes, or past pro football greats donning a flag belt for a country to which they may have ties.
Soon after flag football's inclusion, there was chatter of NFL players possibly joining in on the fun. Of course, there are logistical issues to tackle before their inclusion at the LA Olympics, which open July 14, 2028. Among them: training camp, because the Olympics will fall right in the middle of it. The big question is this: Will owners permit high-priced players to duck out for a gold-medal pursuit? No decisions have yet been made on the status of NFL players for the Olympics.
For now, it's simply about growing the game. Thirteen states currently sanction girls flag football as a high school varsity sport. Just recently, the Pittsburgh Steelers and Philadelphia Eagles helped pave the way to get it adopted in Pennsylvania.
Around the world, it's catching on, too. The women's team from Japan took third at the recent world championships, while one of the best players on the planet is Mexico quarterback Diana Flores. "Could flag football globally become the new soccer? That's something to aspire to," said Stephanie Kwok, the NFL's vice president of flag football.
This type of flag football, though, isn't your Thanksgiving Day game with family and friends. There's a learning curve. And given the small roster sizes, versatility is essential. Most national team members need to be a version of Colorado's two-way standout and Heisman hopeful Travis Hunter.
Forget bump-and-run coverage, too, because there's no contact. None. That took some adjusting for Mike Daniels, a defensive back out of West Virginia who earned a rookie minicamp invitation with the Cleveland Browns in 2017.
"If a receiver is running around, I'm thinking, 'OK, I can kind of bump him here and there and nudge him,'" Daniels explained. "They're like, 'No, you can't.' I'm just like, 'So I'm supposed to let this guy just run?!' I really rebelled at the idea at first. But you learn."
The competition for an Olympic roster spot is going to be fierce because only 10 players are expected to make a squad. The best 10 will earn it, too, as credentials such as college All-American or NFL All-Pro take a backseat.
"I would actually love" seeing NFL players try out, said Daniels, who's also a personal trainer in Miami. "I'm not going to let you just waltz in here, thinking, 'I played NFL football for five years. I'm popular. I have a huge name.' I'm still better than you and I'm going to prove it — until you prove otherwise."
Around the house, Bruce Mapp constantly swivels his hips when turning a hallway corner or if his daughter tries to reach for a hug. It's his way of working on avoiding a "defender" trying to snare the flag. That approach has earned the receiver out of Coastal Carolina four gold medals with USA Football. The 31-year-old fully plans on going for more gold in Los Angeles.
"You grow up watching Usain Bolt (win gold) and the 'Redeem Team' led by Kobe Bryant win a gold medal, you're always thinking, 'That's insane.' Obviously, you couldn't do it in your sport, because I played football," said Mapp, who owns a food truck in the Dallas area. "With the Olympics approaching, that (gold medal) is what my mind is set on."
It's a common thought, which is why everything — including talent camps — starts now.
"Everybody thinks, 'Yeah, the U.S. just wins,'" Daniels said. "But we work hard all the time. We don't just walk in. We don't just get off the bus thinking, 'We're going to beat people.'"
Wheel of Fortune contestants regularly whiff their bonus puzzles, but the winner of Monday, December 2's episode was in for a particularly heartbreaking spin. Her additional letter choices added ZERO letters to a tricky puzzle, which was even more shocking since she had the advantage of a Wild Card wedge.
The tough break involved Kelsey Sowders, a mom of three and steak/wine savant from Tomball, Texas. After an astounding performance, she proceeded to the coveted bonus round, having racked up $40,398 in cash, a prize trip to Japan, and the elusive Wild Card. That meant she got to pick five additional letters instead of four, which often spells success.
Selecting "What Are You Doing?" as her category, and with her eldest son Grant and husband supporting from the sidelines, Sowders joined Ryan Seacrest center stage. She landed on the star portion of the wheel, and the host assured her, "Perhaps it's good luck." "I hope so," Sowders said.
The two-word puzzle read "_EE__N_ __S_," and she chose M, F, D, A and H as her additional letters. However, Vanna White didn't move an inch; the letter choices were useless, leaving the puzzle very difficult.
"Oh no!" Sowders exclaimed in disappointment. She went through the five stages of grief, staring in disbelief, blowing a raspberry in frustration, and collecting herself. Seacrest encouraged her: "You're doing great so far tonight."
But the cruel twist of fate left Sowders unable to solve the puzzle, which turned out to be "KEEPING BUSY," within the 10-second timer. She was close, getting the first word, but nowhere near the second. "Oh no!" Sowders exclaimed once more as the full puzzle was displayed.
Then, cutting back to the contestant and Seacrest, the second dagger came. The host revealed his prize card contained $75,000, and she hid her face from it. "I don't want to see that," she said as Seacrest winced at the camera. "Don't worry," the host told her as she recovered and said, "That's okay."
The game show shared the big miss on YouTube, where fans expressed their shock and empathized with the player's reaction.
"That was a tough one. I didn't get it either. Props to her for getting the first word right, but that second word was tricky as hell. I'm glad she's not walking away empty-handed, though. She still won up until that point and nobody can take that from her," one fan wrote.
"Impossible without the right letter choices. Been a few of those this season," wrote another.
"If she would have won, she would have won over $100,000 cash without actually landing on the envelope! That's really disappointing. Also, the fact that she had 5 letters but didn't get a single one?! Should I be disappointed or impressed?" asked a third.
"Ouch!" wrote a fourth. "You don't see $75,000 all that often!"
Meanwhile, Seacrest had huge shoes to fill in Season 42, replacing the legendary Pat Sajak after his four-decade run. His debut month was the strongest ratings month for WoF in the past three years, and viewers were already treated to a viral moment (via a round of sausage). That said, there have been some questionable host moments, according to fans. In September, Seacrest suffered what fans dubbed his "first blooper," involving a delayed reaction in awarding a bonus round win. Fans also called out the host for ruling against another player before the timer was up. Most controversially, fans recently called out the host for not reminding a player to pick a letter, leading to the player losing the game over a misunderstanding, and by a mere $147.
Another puzzling pattern has emerged: no player has won the bonus round in a full week, with many fans blaming the players, not the host. Like Sowders, another contestant recently botched their bonus puzzle after choosing poor letters, but in that instance they didn't have the boost of the Wild Card wedge.
Wheel of Fortune, Weeknights, Check your local listings
As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life – learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms.
These harms aren't obvious or immediate. They're insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming a significant threat to privacy, equality, autonomy and safety.
AI systems are embedded in nearly every facet of modern life. They suggest what shows and movies you should watch, help employers decide whom to hire, and even influence judges' sentencing decisions. But what happens when these systems, often seen as neutral, begin making decisions that put certain groups at a disadvantage or, worse, cause real-world harm?
The often-overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I've outlined a legal framework to do just that.
Slow burns
One of the most striking aspects of algorithmic harms is that their cumulative impact often flies under the radar. These systems typically don't directly assault your privacy or autonomy in ways you can easily perceive. They gather vast amounts of data about people — often without their knowledge — and use this data to shape decisions affecting people's lives.
Sometimes this results in minor inconveniences, like an advertisement that follows you across websites. But when these repetitive harms go unaddressed, they can scale up, leading to significant cumulative damage across diverse groups of people.
Consider the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. However, behind their seemingly beneficial facade, they silently track users' clicks and compile profiles of their political beliefs, professional affiliations and personal lives. The data collected is used in systems that make consequential decisions — whether you are identified as a jaywalking pedestrian, considered for a job or flagged as a suicide risk. Worse, their addictive design traps teenagers in cycles of overuse, leading to escalating mental health crises, including anxiety, depression and self-harm.
By the time you grasp the full scope, it's too late — your privacy has been breached, your opportunities shaped by biased algorithms, and the safety of the most vulnerable undermined, all without your knowledge. This is what I call "intangible, cumulative harm": AI systems operate in the background, but their impacts can be devastating and invisible.
Why regulation lags behind
Despite these mounting dangers, legal frameworks worldwide have struggled to keep up. In the United States, a regulatory approach emphasizing innovation has made it difficult to impose strict standards on how these systems are used across multiple contexts. Courts and regulatory bodies are accustomed to dealing with concrete harms, like physical injury or economic loss, but algorithmic harms are often more subtle, cumulative and hard to detect. Existing regulations often fail to address the broader effects that AI systems can have over time.
Social media algorithms, for example, can gradually erode users' mental health, but because these harms build slowly, they are difficult to address within the confines of current legal standards.
Four types of algorithmic harm
Drawing on existing AI and data governance scholarship, I have categorized algorithmic harms into four legal areas: privacy, autonomy, equality and safety. Each of these domains is vulnerable to the subtle yet often unchecked power of AI systems.
The first type of harm is eroding privacy. AI systems collect, process and transfer vast amounts of data, eroding people's privacy in ways that may not be immediately obvious but have long-term implications. For example, facial recognition systems can track people in public and private spaces, effectively turning mass surveillance into the norm.
The second type of harm is undermining autonomy. AI systems often subtly undermine your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes a third party's interests, subtly shaping opinions, decisions and behaviors across millions of users.
The third type of harm is diminishing equality. AI systems, while designed to be neutral, often inherit the biases present in their data and algorithms. This reinforces societal inequalities over time. In one infamous case, a facial recognition system used by retail stores to detect shoplifters disproportionately misidentified women and people of color.
The fourth type of harm is impairing safety. AI systems make decisions that affect people's safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they function as designed, they can still cause harm, such as social media algorithms' cumulative effects on teenagers' mental health.
Because these cumulative harms often arise from AI applications protected by trade secret laws, victims have no way to detect or trace the harm. This creates a gap in accountability. When a biased hiring decision or a wrongful arrest results from an algorithm, how does the victim know? Without transparency, it's nearly impossible to hold companies accountable.
Closing the accountability gap
Categorizing these algorithmic harms delineates the legal boundaries of AI regulation and points to legal reforms that could bridge the accountability gap. One change I believe would help is mandatory algorithmic impact assessments, which would require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety, both before and after it is deployed. For instance, firms using facial recognition systems would need to evaluate those systems' impacts throughout their life cycle.
Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, firms using facial recognition systems could be required to adopt an opt-in regime for data processing, with users able to opt out at any time.
Lastly, I suggest requiring companies to disclose their use of AI technology and its anticipated harms. This might include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology.
As AI systems become more widely used in critical societal functions – from health care to education and employment – the need to regulate the harms they can cause becomes more pressing. Without intervention, these invisible harms are likely to continue to accumulate, affecting nearly everyone and disproportionately hitting the most vulnerable.
With generative AI multiplying and exacerbating these harms, I believe it's important for policymakers, courts, technology developers and civil society to recognize the legal harms of AI. This requires not just better laws, but a more thoughtful approach to cutting-edge AI technology – one that prioritizes civil rights and justice in the face of rapid technological advancement.
The future of AI holds incredible promise, but without the right legal frameworks, it could also entrench inequality and erode the very civil rights it is, in many cases, designed to enhance.
Sylvia Lu is a Faculty Fellow and Visiting Assistant Professor of Law at the University of Michigan. The Conversation is an independent, nonprofit source of news, analysis and commentary from academic experts.