Tech firms: We’re trying to make our sites hostile to terrorists

Jack Taylor/Getty Images

In the aftermath of the London attack, Facebook, Google, and Twitter have insisted that they already work closely with the UK government to flush out the sharing of extremist content—as fresh calls to crack down on the Internet and end-to-end crypto once again surfaced following a terror atrocity.

It comes after prime minister Theresa May said on Sunday that terrorist ideology has a “safe space” online, and—on a day when campaigning for the general election was supposedly suspended—she trotted out many of the political pledges in the Tory manifesto, just 12 hours after the attacks in London Bridge and Borough Market took place.

Chief among those vows that are likely to worry tech firms, some of which offer services that come loaded with end-to-end encryption, was the PM’s call for the regulation of “cyberspace to prevent the spread of extremism and terrorism planning.”

Three men deliberately drove a hire van into pedestrians on London Bridge on Saturday night. They then went on a knife-wielding rampage through nearby Borough Market, targeting anyone in their path. Armed police were on the scene and killed the men within eight minutes of the start of the attack.

In her speech on Sunday, May—who flagged up four areas of concern relating to both the online and offline world—said: “We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the Internet and the big companies that provide Internet-based services provide.”

However, Facebook disputed the claim that free content ad networks were a breeding ground for terrorists. The company’s policy director, Simon Milner, said:

We want to provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists.

Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it—and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.

Online extremism can only be tackled with strong partnerships. We have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together.

Twitter’s public policy head, Nick Pickles, echoed Facebook’s statement. “Terrorist content has no place on Twitter,” he said.

“We continue to expand the use of technology as part of a systematic approach to removing this type of content. We will never stop working to stay one step ahead and will continue to engage with our partners across industry, government, civil society, and academia.”

Google agreed. “We are committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online,” it said.

“We are already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area. We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges.”

May claimed on Sunday that “enough is enough” when it comes to dealing with terrorism and extremism in the UK—policy areas that she was directly responsible for during her six-year stint as home secretary.

“Putting in place the right solutions to combating the misuse of online platforms is just one part of the jigsaw in tackling extremism,” said Anthony Walker, deputy chief of industry body techUK. “These are highly complex, challenging issues and tech companies are committed to playing their part, working within a clear legal framework and in full recognition of the seriousness of these issues.”

The Internet Services Providers’ Association—a lobby group which represents many of Britain’s telcos including BT, Sky, and Virgin Media—told Ars that the “government and the security services already have substantial powers in this area and the Internet industry complies with the laws and regulations in the UK and elsewhere.” It added:

When considering the need for more powers to regulate the Internet, policymakers need to be fully aware of the effectiveness of existing powers, resources to deal with the threat and the impact any new measures may have, including unintended consequences that could undermine our defences—for instance the weakening of cyber security.

Technology is only one part of the wider approach to dealing with radicalisation, which is a complex international challenge that requires an international response.

This post originated on Ars Technica UK


Technology firms vow to continue work to remove terror content

Internet companies have reiterated their commitment to help combat online extremism after Theresa May accused big tech firms of giving terrorist ideology “the safe space it needs to breed”.

The prime minister levelled the criticism as she reacted to the London terror attack and called for more to be done “to prevent the spread of extremism and terrorism planning”.

Mrs May said: “We cannot allow this ideology the safe space it needs to breed”.

She added: “Yet that is precisely what the internet, and the big companies that provide internet-based services provide.”

In response, Facebook said it condemned recent attacks and wanted the social media platform to be “a hostile environment for terrorists”.

In a statement, Simon Milner, director of policy at Facebook, said: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it.

“And if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”

Nick Pickles, UK head of public policy at Twitter, said: “Terrorist content has no place on Twitter.

“We continue to expand the use of technology as part of a systematic approach to removing this type of content.

“We will never stop working to stay one step ahead and will continue to engage with our partners across industry, government, civil society and academia.”

Twitter also says it shut down 376,890 accounts linked to terrorism in the last six months of 2016.

Meanwhile, a Google spokesman said: “We are committed to working in partnership with the Government and NGOs to tackle these challenging and complex problems, and share the Government’s commitment to ensuring terrorists do not have a voice online.

“We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges.”

The Tory manifesto has also called for a much tougher approach to regulation on the internet.

Proposals include tougher sanctions for companies that fail to remove illegal content, as well as legislating for an industry-wide levy on social media companies to counter harmful activity online.

Digital campaigners the Open Rights Group said it was disappointing Mrs May had focused on internet regulation and encryption in the aftermath of the London Bridge attack.

“This could be a very risky approach. If successful, Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.

“But we should not be distracted: the internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused.

“While governments and companies should take sensible measures to stop abuse, attempts to control the internet is not the simple solution that Theresa May is claiming.”

Professor Peter Neumann, director of the International Centre for the Study of Radicalisation at King’s College London, said:

“Few people radicalised exclusively online. Blaming social media platforms is politically convenient but intellectually lazy.

“In other words, May’s statement may have sounded strong but contained very little that is actionable, different, or new.”


Kenyan firms slow in enforcing Access to Information Act


A Base Titanium official displays minerals produced at their Kwale factory. The firm says it is not subject to the information access law. FILE PHOTO | NMG

Companies in Kenya have been caught flat-footed in implementing new anti-secrecy laws and are now scrambling to comply.

The Ombudsman has already warned private companies that they face being barred from doing business with the government if they fail to disclose information as required in the Access to Information Act which took effect on September 21, 2016.

Top on the list of what firms are required to do to comply with the Act is appointing a dedicated information access officer, who will process requests for information from the public.

Whilst the law by default identifies the CEO of a company as the information access officer, best practice is to assign an officer to receive and process requests for data.

Companies as well as State agencies are also required to have a tab on their website clearly marked Access to Information where members of the public can lodge requests online.

The Business Daily reached out to more than two dozen companies to enquire about their preparedness and compliance with the Access to Information Act.

Entities obliged to make information public on request include those which receive taxpayer funds, companies which provide public services such as telcos and banks, and those exploiting natural resources such as oil and mineral wealth.

Private firms found guilty of failing to provide necessary information to the public face a Sh0.5 million fine and an embargo on transacting business with the national or county governments.

Millers, for example, who are currently enjoying taxpayer-funded subsidies amounting to Sh6 billion to buy cheap maize, are now subject to the freedom of information rules.

Base Titanium, which in February 2014 began exports of titanium ore — rutile, ilmenite and zircon — from Kwale, said it was not subject to the information laws.

“We are not aware of any section of this legislation that is applicable to us,” said Joe Schwarz, Base Titanium general manager in charge of external affairs and development.

The Act is meant to operationalise Article 35 of Kenya’s Constitution, which provides for access to information held by the government as well as by other persons or entities where that information is “required for the exercise or protection of any right or fundamental freedom.”

Anti-secrecy laws

This means that citizens can file a request for information seeking details of the value of the mineral wealth being mined, the profits the companies are raking in, and how they share earnings with the host community.

Tullow Oil, which announced Kenya’s first commercial oil discovery in March 2012, said its press and communication team was in charge of handling access to information requests.

CEOs of private firms which fail to publicly name their designated information officers face a Sh300,000 fine and a six-month jail term for obstructing access to data.

“Members of the public can put in requests for information using the contact details indicated on http://www.tullowoil.com/ or by sending an email to: infokenya@tullowoil.com. The communication team is tasked with responding to AITA requests,” Tullow Oil told the Business Daily.

Under Kenya’s anti-secrecy laws, anyone can petition Tullow Oil to make public the production sharing contract it has signed with the government of Kenya.

Article 19, a body which fights to promote freedom of expression and information, said the Kenyan government has been slow to implement the new law.

Henry Maina, Article 19 regional director in charge of Eastern Africa, said public bodies must be at the forefront of implementing the access to information laws.

“There is slow implementation of the law. The executive is not owning the law. The ministry is yet to gazette regulations on cost, time and designation of bodies as either public or private,” Mr Maina said in an interview.

He said that individuals and firms have gone to court to enforce their rights to access information, helping set precedent on this fundamental right.

Nairobi Law Monthly, backed by lawyer Ahmednasir Abdullahi, took Nairobi bourse-listed electricity generator KenGen to court in 2011 for failing to provide information relating to a disputed tender for drilling geothermal wells.

Even though judge Mumbi Ngugi ruled that the rights of Nairobi Law Monthly were not breached, she asserted that companies have the obligation to make information public upon request.

“This petition succeeds to the extent that I have found that the 1st respondent (KenGen) has an obligation, on the request of a citizen, to provide access to information under Article 35(1)(a) of the Constitution,” ruled judge Ngugi on May 13, 2013.

The High Court also clarified that the right to information can only be enjoyed by natural persons, and not corporate bodies. “A natural person who is a citizen of Kenya is entitled to seek information under Article 35(1)(a) from the respondent, and the respondent, unless it can show reasons related to a legitimate aim for not disclosing such information, is under a constitutional obligation to provide the information sought.”

Siginon, a logistics firm, said it does not fall in the category of firms that should provide information to the public.

“Siginon does not qualify as a private body and as such the act does not apply to it,” the company said.

In the US, a freedom of information request regarding a transport firm revealed that the company had been fined $155,000 (Sh16 million) between 2004 and 2006 for logbook violations.

Unilever, a fast-moving consumer goods maker, said provisions in the access to information law do not apply to it.

Instead, the firm said, its customer care channels process information requests.

“Unilever is not a public entity and hence there is no obligation for the said designation,” the firm told the Business Daily.

“As is the practice with many FMCG companies, we have dedicated customer service channels where we receive and address all consumer queries.”

KCB, Kenya’s largest bank, was yet to respond to our queries three weeks later despite promising to do so. “Our legal team is studying it and will provide a response tomorrow,” the bank said in an email response dated May 11, 2017.

Enquiries to Safaricom, Airtel, Telkom Kenya, Equity Bank, Bidco, Nakumatt, Tuskys, University of Nairobi, Coca Cola, and Kenya Power did not yield any responses regarding their preparedness on the access to information laws.

The Ombudsman, the body charged with enforcing compliance with the freedom of information law, has promised to fully enforce the legislation.


London attack: Tech firms criticise PM’s condemnation

Prime Minister Theresa May has been warned that her promise to tighten regulation on tech firms after the London attacks will not work.

Mrs May said areas of the internet must be closed because tech giants provided a “safe space” for terrorist ideology.

But the Open Rights Group said social media firms were not the problem, while an expert in radicalisation branded her criticism “intellectually lazy”.

Twitter, Facebook and Google said they were working hard to fight extremism.

Google (which owns YouTube), Facebook (which owns WhatsApp), and Twitter were among tech companies already facing pressure to tackle extremist content – pressure that intensified on Sunday.

Mrs May said: “We cannot allow this ideology the safe space it needs to breed.

“Yet that is precisely what the internet, and the big companies… provide.”

On ITV’s Peston on Sunday, Home Secretary Amber Rudd said an international agreement was needed for social media companies to do more to stop radicalisation.

“One (requirement) is to make sure they do more to take down the material that is radicalising people,” Mrs Rudd said.

“And secondly, to help work with us to limit the amount of end-to-end encryption that otherwise terrorists can use,” she said.

But the Open Rights Group, which campaigns for privacy and free speech online, warned that politicians risked pushing terrorists’ “vile networks” into the “darker corners of the web” by more regulation.

“The internet and companies like Facebook are not the cause of hate and violence, but tools that can be abused.

“While governments and companies should take sensible measures to stop abuse, attempts to control the internet is not the simple solution that Theresa May is claiming,” Open Rights said.

Professor Peter Neumann, director of the International Centre for the Study of Radicalisation at King’s College London, was also critical of Mrs May.

He wrote on Twitter: “Big social media platforms have cracked down on jihadist accounts, with result that most jihadists are now using end-to-end encrypted messenger platforms e.g. Telegram.

“This has not solved problem, just made it different… moreover, few people (are) radicalised exclusively online. Blaming social media platforms is politically convenient but intellectually lazy.”

However, Dr Julia Rushchenko, a London-based research fellow at the Henry Jackson Centre for Radicalisation and Terrorism, told the BBC that Mrs May was right, and that more could be done by tech giants to root out such content.

She felt that the companies erred on the side of privacy, not security. “We all know that social media companies have been a very helpful tool for hate preachers and for extremists,” Dr Rushchenko said.

The online world had been a recruiting aid for foreign fighters, and social media needed “stricter monitoring”, both by government agencies and by third party groups that have been created to flag up extremist content.

‘No place on our platform’

However, the major social media firms said on Sunday that they were working hard to rid their networks of terrorist activity and support.

Facebook said: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it – and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”

Google said it was “committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online”.

It said it was already working on an “international forum to accelerate and strengthen our existing work in this area” and had invested hundreds of millions of pounds to fight abuse on its platforms.

Twitter said “terrorist content has no place on” its platform.

“We continue to expand the use of technology as part of a systematic approach to removing this type of content.

“We will never stop working to stay one step ahead and will continue to engage with our partners across industry, government, civil society and academia.”


Analysis: Joe Lynam, BBC business correspondent

Calling for technology companies to “do more” has become one of the first responses by politicians after terror attacks in their country.

Theresa May’s comments on that subject were not new – although the tone was.

She has already proposed a levy on internet firms, as well as sanctions on firms for failing to remove illegal content, in the Conservative party manifesto published three weeks ago.

Given that 400 hours of video are uploaded to YouTube every minute, and that there are 2 billion active Facebook users, clamping down on sites which encourage or promote terror needs a lot of automatic detection – as well as the human eye and judgement.

Technology companies such as Microsoft, Google, Twitter and Facebook are all part of an international panel designed to weed out and prevent terror being advocated worldwide.

That involves digitally fingerprinting violent images and videos as well as sharing a global database of users who may be extremist.
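
The piece does not spell out how that fingerprinting works. As a rough illustration only, a perceptual “difference hash” captures the general idea of a fingerprint that survives resizing and re-encoding; the sketch below assumes the Pillow imaging library and is not the proprietary technique (such as Microsoft’s PhotoDNA) that the companies actually use.

    # Illustrative perceptual "difference hash" (dHash) for images.
    # A generic sketch, not the platforms' proprietary fingerprinting.
    from PIL import Image  # assumes the Pillow library is installed

    def dhash(image_path: str, hash_size: int = 8) -> int:
        """Return a 64-bit perceptual hash of the image at image_path."""
        # Shrink to (hash_size+1) x hash_size greyscale pixels, discarding detail.
        img = Image.open(image_path).convert("L").resize(
            (hash_size + 1, hash_size), Image.LANCZOS
        )
        pixels = list(img.getdata())
        bits = 0
        # The pattern of brighter/darker transitions between neighbouring pixels
        # stays largely stable under resizing and recompression.
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits; a small distance suggests a near-duplicate."""
        return bin(a ^ b).count("1")

A newly uploaded image whose hash sits within a few bits of a known fingerprint could then be flagged for human review rather than published automatically.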


Garages fault insurance firms for supply of spares

By STELLA CHERONO

Insurance firms have started to supply spare parts in a bid to cut the cost of repairs, which has seen them incur hefty bills.

However, garage owners see that as a ploy to eat into their profits.

Repairers say that, apart from eating into their profits, insurance companies buy the cheapest spares possible and supply them to the garages, forcing them to use poor quality parts at the expense of the clients’ safety and interests.

Under their umbrella body, the Kenya Motor Repairers Association (Kemra), the repairers also claim that assessors—automotive engineers mandated to negotiate repair costs on behalf of insurance companies—are forced to authorise substandard repairs, with intimidation and the fear of losing business the order of the day.

CAUSED DELAYS

Kemra chief executive officer Mercy Kyande said the move has caused delays in repairs and congested garages.

UAP Insurance, for example, says in its service level agreement that the repairer shall allow a grace period of 60 days after submitting repair estimates to ascertain whether the vehicle is repairable, and that no storage charges shall accrue during that time. The agreement also requires the repairer to offer storage for a motor vehicle for 60 days, meaning a garage can end up holding a vehicle for four months.

It also states that the company may supply parts to the repairer, who shall accept and use them, and that UAP will pay the garage a handling fee of 20 per cent on the parts, a deal Kemra says will compromise quality.

“We can no longer guarantee clients that the repairs we conduct are of high quality because some insurers supply repairers with very low quality spare parts,” said Ms Kyande. “Despite several warnings to these companies by the repairers association, the insurance companies still insist that they must cut costs.”

DISMISSED ALLEGATIONS

Ms Kyande said during an interview that the motor vehicle owners do not get value for the money they pay insurance companies.

While confirming that some insurance firms supply parts to repairers, Association of Kenya Insurers (AKI) chief executive officer Tom Gichuhi dismissed the allegations as business politics.

He said motorists had not raised such issues with insurers.

“There are three options for compensation: Repair, replacement and payment of cash,” Mr Gichuhi explained, “and the policy document does not prescribe how the repairs should be done.

“The clients have always been happy, and so the repairer should have an issue only if they are not paid their money.”

He said the repairer’s job was to repair and not to supply parts, adding that there could be vested interests by Kemra members who may own spare parts shops.

RIDICULOUS CLAIMS

Ms Kyande, however, said insurers mostly used suppliers little known to the industry, and that the claims were therefore ridiculous.

In a letter to APA Insurance Company dated March 14, 2017, Kemra said the move was a variation of the repair contract from one where the repairer supplies parts to one where the insurer sources and delivers parts to the repairer, who then only provides the technical expertise of repair.

“This move is borrowed from the developed economies but we note that the model needs to be adopted in a holistic manner and not on piecemeal basis,” said Ms Kyande.

She said the contract between the insurers and repairers naturally and legally required both parties to engage and, when there is a need, strike a compromise.

OBSERVE ETHICS

“We believe insurance companies conduct their business within the law and that they adopt best practice and observe ethics in their operations,” said Ms Kyande. “They must win the trust of the public and their customers, who include our members, by portraying utmost good faith at all times.”

Kemra said the motor vehicle labour-only contract usually takes into consideration factors such as actual man-hours, hourly rates, quality of parts, timeliness and full supply of parts.

Insurance Regulatory Authority communications manager Noella Mutanda said they had not received complaints from repairers—despite letters in our possession showing the contrary.

K-State, KU athletics prepare for increased security under concealed carry law

Increased security checkpoints have become a way of life at airports and public buildings, and college sports fans of Kansas State University and the University of Kansas will face higher scrutiny as changes in the state concealed carry law take effect this summer.

As of July 1, under a law passed four years ago by the Kansas Legislature, anyone over the age of 21 will be allowed to carry a firearm onto university grounds in Kansas. Firearms may be prohibited in specified facilities, but that requires the implementation of Adequate Security Measures (ASM) involving metal detectors and/or the use of hand wands.

“The law has been in effect but the universities had a four-year exemption to plan and put policies and procedures in place before the law affected the campuses,” said Casey Scott, senior associate athletics director for operations and event management at K-State. “That exemption expires on July 1.

“We had made a determination, and I believe KU and Wichita State in the same regard, that we would not want weapons concealed carried into our athletics events in our football stadium and in Bramlage Coliseum at the higher-level attended events. That’s how we are proceeding at this point.”

For the 2017-18 football and basketball seasons, K-State will have metal detectors at all public gates at Bill Snyder Family Stadium and Bramlage Coliseum, as will KU at Memorial Stadium and Allen Fieldhouse.

The price tag for the equipment and the staff to operate the system is hefty.

“We’re estimating the purchase of about 73 metal detectors to cover our football stadium and that total cost will be roughly $420,000,” Scott said. “In addition, we’ll have to hire an outside security group to provide the manpower. Using the information the consultants brought us, we estimate it will take at a football game up to 200 additional people to provide the screening and operate the metal detectors. That will probably cost us around $210,000 for the football season.

“We would buy metal detectors that are portable and use the same screening and metal detectors at Bramlage for men’s and women’s basketball. We’ll need 22 to 25 in Bramlage and we’ll probably have to have about 50 security people per game helping us man those. The personnel cost for basketball season to cover roughly 36 men’s and women’s games would be $130,000, so that’s close to $800,000 in this first year for equipment and manpower, and in subsequent years our cost would be primarily the labor to operate the machines.”
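
Scott’s first-year estimate can be roughly reconciled from the individual figures he gives. The short sum below uses only the numbers quoted above and is a sanity check rather than an official budget.

    # Back-of-the-envelope check of the K-State first-year figures quoted above.
    football_detectors = 420_000   # ~73 walk-through metal detectors for the stadium
    football_staffing = 210_000    # up to 200 extra screeners per game, full season
    basketball_staffing = 130_000  # ~50 screeners per game across roughly 36 games
    # Basketball reuses the same portable detectors, so there is no second equipment line.

    first_year_total = football_detectors + football_staffing + basketball_staffing
    print(first_year_total)  # 760000, i.e. "close to $800,000" as Scott puts it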

Jim Marchiony, associate AD/public affairs at KU, estimates the cost to implement security measures at events with an anticipated attendance of 5,000 or more at approximately $1 million.

“Since this subject has come up, we’ve said the first priority is safety of the fans,” Marchiony said. “We’re not going to get specific about the security measures but I’ll tell you that metal detectors and wanding will be part of the process.”

With four years to prepare for this process, Scott said the money to cover the cost has been budgeted and won’t be placed on the fans with sudden ticket price increases. The cost also has been budgeted at KU.

“Well, I think it’s going to be budgeted in a separate way,” Marchiony said. “We’ll look at the whole budgeting process but the cost will not be passed down to the fans.”

Scott said wanding is less expensive but also less efficient than walk-through metal detectors.

“It is highly taxing on people to wand other people,” Scott said. “You’re talking 50,000 people coming into a (football) game and physically you can’t do it fast enough. As we explored that and worked with consultants and security firms, we made the determination that to make it feasible to get the amount of people into the facility in a reasonable amount of time, the only way you can do it is through metal detectors.”

K-State began the process of increased security during the past 2016-17 sports season with the implementation of a clear bag policy for all ticketed athletic events. That limits carry-in bags to one-gallon clear plastic or vinyl bags and prohibits items such as coolers, briefcases, fanny packs and computer bags.

The KU athletic department will institute a clear bag policy for the 2017-18 season along with metal detectors.

“We’ve been planning this and while we didn’t solidify plans, yet, we were getting ready for implementation in case the law didn’t change and we were also waiting for direction from the university and the Board of Regents, also,” Marchiony said.

“Communication is important any time you make a change like this that has the potential to affect so many people. We have sent messages directly to our donors and season ticket holders explaining what is going to happen. We had football select-a-seat very recently and gave information at that time. We’re going to take every opportunity to reinforce the message so hopefully there won’t be anybody who arrives at the first football game who doesn’t know what the policy is.”

The acceptance of the clear bag policy by Wildcat fans went as smoothly as could have been expected.

“The clear bag policy was always step one in upgrading our security measures knowing that the coming season (2017-18) we likely would have to go to metal detectors, unless there was a change in the law,” Scott said. “What the clear bag policy was intended to do was help speed people through the screening process as they got to the metal detectors. Our intention was to have a year in which to communicate to our fans what the clear bag policy was about in terms of security benefits and speeding people into the facility.

“There has been a learning curve, of course, but I would say as we got through football and basketball that 90 to 95 percent of our fans had adapted to the clear bag policy. There were folks who did not like it, obviously, and there were push backs from families that want to bring in large diaper bags. We get that, but we all knew going into the second phase of the security that we were trying to help acclimate people to what’s going to be a higher level of security.”

Baylor is the only Big 12 school that has used metal detectors.

“They went to using metal detectors at their football stadium this past year,” Scott said. “During our game there we viewed their operation and got quite a bit of good information and tips on what to do and what not to do as they went through their first year.”

As fans adjust to the screening process, the biggest adjustment may be shutting down tailgate parties earlier to allow time to get through the stadium gates.

“This level of security is fairly common and accepted now at all professional venues,” Scott said. “I believe if you go to a Chiefs game or a Royals game, you’re going to walk through a metal detector. I know when you go to the Sprint Center, you walk through a metal detector.

“It will be an adjustment for our fans at K-State and they’re going to probably have to move toward the gates much sooner than they used to because it will take longer to get in the gate.”

Fan safety is the priority of the increased security measures.

“I think maybe it’s natural the first time or two that you go to an event, it would be natural to wonder if anybody is carrying firearms,” Marchiony said, “but I think as we go forward that feeling will decrease.”


Management style key to failure or success of firms


A business with good governance often reports higher profits. PHOTO | FOTOSEARCH

Corporate governance has existed for centuries. In 1776, Adam Smith stated that directors acting as agents of shareholders could not be expected to be as diligent as shareholders, thereby separating the interests of shareholders from those of directors.

Shareholders are owners of the business and risk takers, and are therefore more interested in good governance, while directors do not own the business and hence do not bear a lot of risk.

Corporate governance adopts company law as its foundational law and therefore developments in company law over the years have had a direct impact on corporate governance.

The modern concept of a company comes from laws developed from the 19th century whose key objective was to have a legal entity separate from its owners but also having the rights of a legal person.

One definition of corporate governance has been the relationship between directors and shareholders. Good governance therefore is the management of the relationship between shareholders and the board so as to minimise conflict.

On the one hand, shareholders should understand the board’s mandate and allow it to exercise its duties for effectiveness while on the other hand the board should understand the genuine interest of shareholders in securing their investments.

A second definition of corporate governance is the interrelation between management, the board of directors, and shareholders. Corporate governance is deemed to be management and balancing of all the interests above so as to maintain an optimum relationship.

Corporate governance has also been defined in terms of the structure and framework.

One school of thought defines it as the structure through which a company’s objectives are achieved; another states that it is a framework through which the relationship between players is managed; yet another states that it dictates corporate behaviour, as it includes disclosures and management controls.

It’s the structure through which players can pursue most effectively the goals of the corporation. In this school of thought corporate governance is defined as a system, a structure or a framework.

It is important for every company to have in place sound corporate governance practices so as to cater for the various interests represented.

In Kenya corporate governance is contained in the Companies Act while governance of other associations such as partnerships and NGOs is contained in the mother laws. A business with good governance will report higher profitability.

The business will also have a good reputation and attract investors and other stakeholders.

It will also reduce risks such as non-compliance with statutory provisions. A soundly governed company will take into account third-party interests, such as those of employees, therefore attracting a pool of talented staff. When setting up a business, think through its governance structure.


London attack: U.K. PM lays blame on internet firms for spread of extremism but is she right?

DETROIT — In the wake of Britain’s third major attack in three months, Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online.

The British Prime Minister also called out internet companies a day after a tragic attack at London Bridge which left seven people dead.

“We cannot allow this ideology the safe space it needs to breed – yet that is precisely what the internet, and the big companies that provide internet-based services provide,” May said Sunday.


Here’s a look at extremism on the web, what’s being done to stop it and what could come next.

Q. What are technology companies doing to make sure extremist videos and other terrorist content doesn’t spread across the internet?

A. Internet companies use technology plus teams of human reviewers to flag and remove posts from people who engage in extremist activity or express support for terrorism.


Google, for example, says it employs thousands of people to fight abuse on its platforms. Google’s YouTube service removes any video that has hateful content or incites violence, and its software prevents the video from ever being reposted. YouTube says it removed 92 million videos in 2015; 1 percent were removed for terrorism or hate speech violations.

Facebook, Microsoft, Google and Twitter teamed up late last year to create a shared industry database of unique digital fingerprints for images and videos that are produced by or support extremist organizations. Those fingerprints help the companies identify and remove extremist content. After the attack on Westminster Bridge in London in March, tech companies also agreed to form a joint group to accelerate anti-terrorism efforts.
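
The consortium has not published how its matching works. As a loose sketch of the general pattern only, the snippet below uses an exact-match SHA-256 digest as a stand-in for whatever fingerprinting the companies actually apply, with hypothetical helper names throughout.

    # Sketch of checking uploads against a shared set of known fingerprints.
    # The hash function and the shared set are stand-ins, not the real database.
    import hashlib

    shared_fingerprints = set()  # would be populated with digests from partner companies

    def fingerprint(data: bytes) -> str:
        """Exact-match fingerprint of an uploaded file (SHA-256 hex digest)."""
        return hashlib.sha256(data).hexdigest()

    def should_block(upload: bytes) -> bool:
        """True if the upload matches a fingerprint already flagged by a partner."""
        return fingerprint(upload) in shared_fingerprints

    shared_fingerprints.add(fingerprint(b"previously flagged content"))
    print(should_block(b"previously flagged content"))  # True: exact re-upload caught
    print(should_block(b"slightly altered copy"))       # False: exact hashing misses edits

As the last line suggests, exact hashes only catch byte-identical re-uploads, which is one reason perceptual fingerprints of the kind sketched earlier are attractive for images and video.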

Twitter says in the last six months of 2016, it suspended a total of 376,890 accounts for violations related to the promotion of extremism. Three-quarters of those were found through Twitter’s internal tools; just 2 percent were taken down because of government requests, the company says.

Facebook says it alerts law enforcement if it sees a threat of an imminent attack or harm to someone. It also seeks out potential extremist accounts by tracing the “friends” of an account that has been removed for terrorism.

Q. What are technology companies refusing to do when it comes to terrorist content?

A. After the 2015 mass shooting in San Bernardino, California, and again after the Westminster Bridge attack, the U.S. and U.K. governments sought access to encrypted communication between the terrorists who carried out the attacks. Apple and WhatsApp refused, although the governments eventually managed to go around the companies and get the information they wanted.


Tech companies say encryption is vital and compromising it won’t just stop extremists. Encryption also protects bank accounts, credit card transactions and all kinds of other information that people want to keep private. But others — including former FBI Director James Comey and Democratic Sen. Dianne Feinstein of California — have argued that the inability to access encrypted data is a threat to security. Feinstein has introduced a bill to give the government so-called “back door” access to encrypted data.
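
Neither WhatsApp’s nor Apple’s protocol is reproduced here, but the end-to-end principle at the centre of this debate can be sketched briefly: a message is encrypted to the recipient’s public key, so only the recipient’s private key can open it and the relaying service has nothing readable to hand over. The example below assumes the PyNaCl library and is an illustration of the concept, not any messaging app’s actual design.

    # Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; only the public halves are ever shared.
    alice_secret = PrivateKey.generate()
    bob_secret = PrivateKey.generate()

    # Alice encrypts to Bob's public key using her own secret key.
    sending_box = Box(alice_secret, bob_secret.public_key)
    ciphertext = sending_box.encrypt(b"meet at noon")

    # Only Bob's secret key (paired with Alice's public key) can open the message;
    # a server relaying `ciphertext` sees only unreadable bytes.
    receiving_box = Box(bob_secret, alice_secret.public_key)
    print(receiving_box.decrypt(ciphertext))  # b'meet at noon'

Any mandated “back door” would have to change that property, which is why the companies argue that weakening encryption affects every user of the same system, not just suspects.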

Q. Shouldn’t tech companies be forced to share encrypted information if it could protect national security?

A. Weakening encryption won’t make people safer, says Richard Forno, who directs the graduate cybersecurity program at the University of Maryland, Baltimore County. Terrorists will simply take their communications deeper underground by developing their own cyber channels or even reverting to paper notes sent by couriers, he said.

“It’s playing whack-a-mole,” he said. “The bad guys are not constrained by the law. That’s why they’re bad guys.”

But Erik Gordon, a professor of law and business at the University of Michigan, says society has sometimes determined that the government can intrude in ways it might not normally, as in times of war. He says laws may eventually be passed requiring companies to share encrypted data if police obtain a warrant from a judge.

“If we get to the point where we say, ‘Privacy is not as important as staying alive,’ I think there will be some setup which will allow the government to breach privacy,” he said.

Q. Is it really the tech companies’ job to police the internet and remove content?

A. Tech companies have accepted that this is part of their mission. In a Facebook post earlier this year, CEO Mark Zuckerberg said the company was developing artificial intelligence so its computers can tell the difference between news stories about terrorism and terrorist propaganda. “This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” Zuckerberg said.
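
Facebook has not published how that system works. Purely as a generic illustration of the text-classification task Zuckerberg describes, a minimal scikit-learn pipeline trained on labelled examples might look like the sketch below; the training strings are placeholders, and the library choice is an assumption.

    # Generic text-classification sketch (scikit-learn), not Facebook's system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder training data purely for illustration.
    texts = [
        "Breaking: police respond to incident in the city centre",  # news reporting
        "Analysis: how investigators traced the suspects",          # news reporting
        "placeholder example of recruitment propaganda, item one",  # propaganda
        "placeholder example of recruitment propaganda, item two",  # propaganda
    ]
    labels = ["news", "news", "propaganda", "propaganda"]

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(texts, labels)

    print(classifier.predict(["Report: attack foiled by security services"]))

In practice the hard part Zuckerberg alludes to is exactly this distinction: the vocabulary of a news report about terrorism overlaps heavily with that of propaganda, so far richer models and far more labelled data are needed than this toy pipeline suggests.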

But Gordon says internet companies may not go far enough, since they need users in order to sell ads.

“Think of the hateful stuff that is said. How do you draw the line? And where the line gets drawn determines how much money they make,” he said.


Others say the focus on tech companies and their responsibilities is misplaced. Ross Anderson, a professor of security engineering at the University of Cambridge, says blaming Facebook or Google for the spread of terrorism is like blaming the mail system or the phone company for Irish Republican Army violence 30 years ago. Instead of working together to censor the internet, Anderson says, governments and companies should work together to share information more quickly.

Former Secretary of State John Kerry also worries about placing too much blame on the internet instead of the underlying causes of violence.

“The bottom line is that in too many places, in too many parts of the world, you’ve got a large gap between governance and people and between the opportunities those people have,” Kerry said Sunday on NBC’s Meet the Press.

  • With files from Global News

