10 interesting stories served every morning and every evening.




1 1,271 shares, 47 trendiness, 2339 words and 18 minutes reading time

VERIZON / YAHOO! BAD FORM!

Well, as of sometime on December 5th, a huge number of the archivists who were scrambling to rescue archives from Yahoo Groups had their e-mail addresses apparently banned, so they can no longer rescue the archives of the groups they had set up operations to save.

So that means that, for me anyway, Verizon has lost all benefit of the doubt, and is likely at least aware of what Yahoo is doing to groups or, at worst, complicit.

We are receiving comments and messages from the frustrated and angry groups archivists, and some of those are posted below. You can send in your own by e-mailing it to owlsy@yahoo.com. I won’t put up threatening, nasty or vitriolic messages, however; you can be angry, but be angry in a controlled way that makes us better than them.

From some of our archivists, dated Wednesday, December 4th:

The Archive Team (who is working with us to save content to upload to the Internet Archive) was again blocked by Yahoo. The block is wiping out the past month of work done by hundreds of volunteers.

This info was re­ported on their IRC chan­nel.

Yahoo banned all the email addresses that the Archive Team volunteers had been using to join Yahoo Groups in order to download data. Verizon has also made it impossible for the Archive Team to continue using semi-automated scripts to join Yahoo Groups — which means each group must be re-joined one by one, an impossible task (redoing the work of the past 4 weeks over the next 10 days).

On top of that, something Yahoo did has killed the last third-party tool that users and owners have been using to access their messages, photos and files (PGOffline). Note: not everyone who paid for the PGOffline license is being impacted by the problem, but the developer does not have a workaround. Here is their post about it.

Yahoo’s own data tools do not pro­vide Group Photos and, as in my case, for two IDs I keep get­ting the data from an­other Yahoo ac­count.

The Internet Archive/Archive Team Faces 80% Loss of Data Due To Verizon Blocking

@betamax meedee+: we’ve lost access to the vast majority of the groups we joined [because Verizon blocked access to our accounts]

“… the effect is that some percentage … of the signed-up groups can no longer be fetched from …”

They are working to get a final number, but the Archive Team estimates an 80% loss of the Groups they and their volunteers spent the last month joining in preparation for archiving.

For our community, this is a 100% loss of the Groups we submitted to the Archive Team for archiving (30,000).

Thanks for reaching out. It would be really great if you could both forward it to Verizon and publish it on the blog. We had a lot of “external” (non Archive Team) volunteers helping out, with the expectation that the groups they were joining would be saved, and it is important to communicate to them that Yahoo have basically destroyed most of our progress and work will now need to begin from scratch. I want to avoid a situation where someone comes along in six months’ time, asking for a group they expected to be saved because they joined it, and having to tell them we didn’t manage to save it.

It would also be good to communicate to Yahoo our disappointment that they decided to block our archival efforts without opening a dialogue with us. We’ve always said we’d be happy to work with Yahoo to archive Groups in a way that minimizes disruption to their services. Realistically, the only way we’re going to get anywhere near the number of groups we had joined prior to their mass-banning of our accounts is with an extension to the deletion date, which I know you’ve been pushing strongly for. (The “best” solution would be for Verizon to un-ban our accounts, but I doubt that is going to happen.)

Many thanks for your fan­tas­tic work so far in keep­ing the spot­light and pres­sure on Yahoo / Verizon.

I hope to ad­dress your con­cerns and add some clar­ity on the is­sues you’re ref­er­enc­ing.

Regarding the 128 people who joined Yahoo Groups with the goal of archiving them — are those people from Archiveteam.org? If so, their actions violated our Terms of Service. Because of this violation, we are unable to reauthorize them. Also, moderators of Groups can ban users if they violate their Groups’ terms, so previously banned members will be unable to download content from that Group. If you can send the user information, we can investigate the cause of the lack of access.

The Groups Download Manager will down­load any con­tent an in­di­vid­ual posted to Yahoo Groups. However, it will not down­load at­tach­ments and pho­tos up­loaded to the Group by other mem­bers. For those that are hav­ing dif­fi­culty with the files de­liv­ered, this help ar­ti­cle ex­plains the types of files within the .zip file sent and how to find third party ap­pli­ca­tions that open them. This is the only way that we can de­liver data to our users.

While users will no longer be able to post or upload content to the site, the email functionality still exists. If you are having issues with this feature, please reach out to YahooGroupsEscalations@verizonmedia.com and we will work to fix the problem without any delay.

I understand your usage of groups is different from that of the majority of our users, and we understand your frustration. However, the resources needed to maintain historical content from Yahoo Groups pages are cost-prohibitive, as those pages are largely unused.

Regarding your concern around the timeline: on 10/16, we posted this help page and began sending emails to Groups users explaining the changes to come, including the 12/14 deadline for download requests.

On 11/12 and 11/19, we con­firmed in email to you that so long as a re­quest to down­load was put in by December 14th, we will en­sure your down­load is com­plete be­fore any dele­tion. This is the case for all Groups users and step-by-step in­struc­tions are be­ing sent now to users to sup­port them through the process.

We rec­og­nize this tran­si­tion may be dif­fi­cult, and we’d like to pro­vide as much as­sis­tance and clar­ity as we can.

From there, I sent two replies: one when I was angrier, and one when I calmed down.

VERIZON/YAHOO does NOT un­der­stand our frus­tra­tion.

THE ACCESS IS OFF AND ON.

YAHOO has mis­treated us and abused us all for 6 years, and it’s all com­ing out this week­end. VERIZON owns Yahoo now, and is the one who should clean up their mess.

If we are al­lowed to have our archives from Verizon/Yahoo, then we should be able to get them our­selves any way we wish. All Verizon plans to do is throw it away. The stuff you send us is messed up, bro­ken, in­com­plete, and virus rid­den. It is UNACCEPTABLE as a so­lu­tion.

The 128 peo­ple you banned were REQUESTED by the group own­ers to get their stuff.

Verizon re­fuses to give us more time to get it. We can’t do it in 7 days.

So, we will con­tinue to press pub­lic opin­ion about this is­sue, in every arena that we can with all that we have to make this dele­tion stop and give us more time, and work some­thing out with Verizon to get our stuff, which is costly to store, and go away.

Bottom line. Give us our archives and we’ll leave and never come back.

……And this is the sec­ond e-mail I sent when I calmed down:

Please re­lay this to Verizon/Yahoo.

This is the un­fair­ness that Verizon has shown to Yahoo Groups Users.

First of all, the ini­tial let­ter said that Verizon had re­searched the use of Yahoo Groups. This is a com­plete false­hood. You know how I know it is? Because if you had truly re­searched the use of Yahoo Groups, you’d have found the 30,000 ac­tive groups that are used of­ten, many on a daily ba­sis, de­spite them be­ing bro­ken in many ar­eas. But Verizon did not do that. If they had, they would have found:

A po­lice co­op­er­a­tive in Washington DC that was us­ing them as a net­work to com­mu­ni­cate with their re­spec­tive neigh­bor­hoods with over 17,000 mem­bers.

A phone com­pany in the UK that as­signs phone num­bers us­ing the groups and now will lose all those phone des­ig­na­tions when it’s deleted.

A Birding group in New Delhi with 2,000 members that has collected data and research on birds for TWO DECADES.

An Adoption group in France, that has been us­ing it for years and years to com­mu­ni­cate and share his­tory and pho­tos and more.

They also would have found:

Numerous sup­port groups for peo­ple who are sui­ci­dal or de­pressed.

Numerous med­ical groups for peo­ple to com­mu­ni­cate more ef­fec­tively with their doc­tors.

Numerous sup­port and help groups for the Elderly.

Numerous Historical groups for WW2 Veterans, Vietnam Veterans, etc.

Numerous sci­ence groups that have used them for years and have all their re­search there.

Numerous fan fic­tion groups or arts groups that have shared their work for years.

No Verizon, these groups are NOT largely unused. You just didn’t do your homework. You didn’t find us, who could have told you they were used all the time. You didn’t make an effort to understand what groups were, and how they were used. You just decided they weren’t being used and weren’t important, and decided, “Hey, let’s just delete them!”

So not re­search­ing thor­oughly, and prob­a­bly lis­ten­ing to Yahoo’s telling you they weren’t be­ing used was your first mis­take.

The second mistake: when the initial plan to delete our archives was hatched, some people learned of it on October 17th. Not everyone did. That’s because it was very quiet and low-key, and not everyone goes to the website all the time. Many use it when they need to search for something, or check out a photo. The point is, it’s a mail list with an archive.

Some e-mails were sent to members, and perhaps all were sent, but believe me, all did not arrive on that date. People began finding out by receiving letters days, weeks, and even a full month later. This is because Groups e-mail is unreliable. Period. It has been unreliable since Mayer broke it in 2013.

Even those learning of it on October 17th had a very small window of time to save their group: a period of 58 days, less than two months. That was never going to be enough time for over 30,000 groups to save their archives.

As you said, pho­tos were not down­loaded from the groups as a group. So any­one who wanted them had to man­u­ally save them by hand. And there are groups with thou­sands or hun­dreds of thou­sands of pho­tos, es­pe­cially art groups and pho­tog­ra­phy groups. It could­n’t be done in the time we were given.

This delay in finding out left barely weeks for some people to act. Thus the panic all over the Internet, which Verizon would see if they would just look, was immediate and furious. They came to me, because I stopped the destruction of Yahoo Groups in 2010, and have been crusading against Yahoo for their unethical treatment of us, and fighting to protect our archives, for 6 years.

So, as I had kept the Crusade blog up and the Crusade groups open, they all fled back to me. And this time, it was­n’t that we could­n’t get them [our archives] and leave, and it was­n’t that they had bro­ken ac­cess. This time Verizon planned to­tal de­struc­tion. This time it was for keeps. So of course, I had to start the Crusade again and take it to high gear.

So the best thing Verizon could do, since they are just go­ing to throw us all into the trash any­way, as we aren’t im­por­tant to them, is let us get our archives any way we can.

The terms of service really should not apply to people who have been told, “we’re gonna delete you from existence.” If it’s lawful for us to get them from you, in a broken, buggy and virus-ridden state, it’s just as lawful for us to get them ourselves.

Yahoo made a huge mess of us. I tried to warn Verizon even be­fore they pur­chased it. They did not lis­ten. Now they have the mess on their hands, and we will not be silent. Even if Verizon re­ally does delete us out of ex­is­tence, we will never stop telling the world what they and Yahoo did. Because it’s high time Yahoo Groups Users start be­ing treated as hu­man be­ings again.

Bottom line. Verizon is be­ing un­fair to a group of peo­ple us­ing a prop­erty they pur­chased, they are be­ing un­eth­i­cal in the man­ner in which they an­nounced it, and un­rea­son­able in the time that they gave us to ac­com­plish it.

So we de­mand fair and equal treat­ment, and will stand up for it un­til we get it.

Ooohkay-then. That’s how they want to be. As you know, the Owl’s gloves are off, and this really nails home how complicit Verizon is going to be. At the same time, the press has become aware of Verizon/Yahoo’s choice to become another statistic for bad PR over the matter. We too have this power, and some articles about our cause are already pending over this weekend.

It is time that we take this up a notch.  Archiving is still go­ing on through the var­i­ous teams, but we can be­gin the fight on an­other side of this sce­nario.  Won’t you join us?

...

Read the original on modsandmembersblog.wordpress.com »

2 1,169 shares, 44 trendiness, 904 words and 7 minutes reading time

How to fight back against Google AMP as a web user and a web developer – Marko Saric

There’s a pop­u­lar thread on Hacker News with lots of peo­ple com­plain­ing about how Google AMP (Accelerated Mobile Pages) is ru­in­ing their mo­bile web ex­pe­ri­ence.

This week I also got two AMP links sent to me via Telegram, and seeing those Google URLs replacing unique domain names made me a bit sad on behalf of the owners of those sites. As a site owner myself, it feels like a website’s sovereignty being taken away.

Other than peo­ple shar­ing links with me, I rarely en­counter AMP in the wild. It is pos­si­ble to re­strict Google AMP from your life both as a web user and as a web de­vel­oper. Here’s how you can fight back against Google AMP.

Other search en­gines such as Qwant and DuckDuckGo don’t rank AMP sites. So tak­ing the step of switch­ing from Google search to a more eth­i­cal choice re­moves most of the AMP touch points you might have.

It’s sim­ple to switch the de­fault search en­gine in your browser of choice. You can do it in the browser set­tings di­rectly. Anything other than Google will get you in the no-AMP ter­ri­tory.

But now you might say all those non-AMP web­sites you visit are full of ad­ver­tis­ing, dis­trac­tions and are slow to load? There’s a so­lu­tion for that too.

Firefox is a great browser al­ter­na­tive that is worth a try. Just vis­it­ing a site with Firefox’s Enhanced Tracking Protection on makes a faster and less in­tru­sive web. It’s a built-in blocker of in­tru­sive ads and in­vis­i­ble scripts.

Firefox also has a Reader Mode, so any site can be clutter-free even without AMP. And these features work both on Firefox for desktop and Firefox for mobile. Here’s how the Reader View looks in Firefox Preview for Android:

Publishers and other site own­ers feel forced to use AMP as they fear that they’ll lose Google vis­i­bil­ity and traf­fic with­out it. These are the forces some pub­lish­ers can­not re­sist un­til more peo­ple stop us­ing Google Chrome and search.

You as a site owner or de­vel­oper are a dif­fer­ent case. I like the idea of a faster and dis­trac­tion-free web but I don’t like the idea of web be­ing con­trolled and molded by one com­pany. Especially not one that is the largest ad­ver­tis­ing com­pany in the world.

This is the Googled-web Google wants to see you develop. The web “delivered by Google”. Your site being integrated with all the other cool Google products such as Analytics and AdSense.

I en­joy vis­it­ing sites cre­ated by real peo­ple. The AMP pages are more bor­ing, less di­verse, less com­pet­i­tive, less func­tional and have less per­son­al­ity.

The main rea­son AMP ex­ists is that the sites are slow to load. But why are the sites slow to load in the first place? They fea­ture many un­nec­es­sary third-party el­e­ments that do noth­ing for the user ex­pe­ri­ence other than slow it all down.

Google them­selves will point the fin­ger at their own an­a­lyt­ics and ads if you use their web­page speed tests to mea­sure the per­for­mance of your site. They even pro­vide guides on how to make third-party re­sources less slow.

Analytics scripts, ad­ver­tis­ing scripts, so­cial me­dia scripts and so much more junk. It is nor­mal to visit a site and the ma­jor­ity of it is com­posed of un­nec­es­sary el­e­ments that you don’t see. This is why the web is so much faster with Firefox’s Enhanced Tracking Protection or with an ad­blocker.

Firefox blocks al­most 30 dif­fer­ent track­ers on a sin­gle page of Wired. It also blocks the auto-play of video and au­dio. This is about 30% of the to­tal page weight. It’s im­por­tant to note that Wired still gets to dis­play their ban­ners for peo­ple to sub­scribe to the mag­a­zine.

As a reader, you don’t re­ally see any dif­fer­ence at all in the ar­ti­cle that you’re read­ing. All this con­tent that they try to load and that is blocked by Firefox is not use­ful to you.

Here are some stats:

* 94% of sites in­clude at least one third-party re­source

* 56% in­clude at least one ad re­source

* Google owns 7 of the top 10 most pop­u­lar third-party calls

And what are these calls that Google owns? They’re things such as Google Analytics, Google Fonts and Google’s DoubleClick ad­ver­tis­ing scripts.

So you can see why there must be some kind of in­ter­nal strug­gle at Google. They un­der­stand the value of a faster web but they also can­not go af­ter the main cause of the slow web. And this is how tech­nol­ogy such as AMP gets in­vented and makes things worse.

We should be treat­ing the cause of this slow web dis­ease in­stead.

It’s pos­si­ble to make your site faster than an AMP site with­out us­ing AMP. You need to put the speed as the pri­or­ity when de­vel­op­ing.

* Restrict un­nec­es­sary el­e­ments. Understand every re­quest your site is mak­ing and con­sider how use­ful they are. Do those flash­ing and dis­tract­ing calls-to-ac­tion ac­tu­ally make a dif­fer­ence to the goals you have or are they sim­ply an­noy­ing 99% of peo­ple that visit your site? Do you re­ally need auto-play­ing videos?

* Restrict third-party con­nec­tions and scripts. Do you ac­tu­ally need Google fonts? Do you need the of­fi­cial so­cial me­dia share but­tons? Do you need to col­lect all that be­hav­ioral data that you may never look at? There are bet­ter and lighter so­lu­tions for each of these.

* Lazy load im­ages and videos. There’s sim­ply no rea­son to load your full page and every­thing on it as soon as a vis­i­tor en­ters your site. Lazy load­ing only loads im­ages in the browser’s view and the rest only as the vis­i­tor scrolls down the page.

* I have a full guide on steps I took to speed up my WordPress site and get top scores on the dif­fer­ent web­page speed tests.
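The lazy-loading step above can be sketched with IntersectionObserver (a minimal illustration, assuming images are marked up with a `data-src` attribute instead of `src`; modern browsers also offer the simpler `loading="lazy"` attribute):

```javascript
// Lazy-loading sketch: swap data-src into src only once an image scrolls
// into view. pickVisible is pure logic; the observer wiring below it is
// the only browser-dependent part.
function pickVisible(entries) {
  return entries.filter(e => e.isIntersecting).map(e => e.target);
}

if (typeof IntersectionObserver !== 'undefined') {
  const obs = new IntersectionObserver(entries => {
    for (const img of pickVisible(entries)) {
      img.src = img.dataset.src;  // start the real download
      obs.unobserve(img);         // each image only needs loading once
    }
  });
  document.querySelectorAll('img[data-src]').forEach(img => obs.observe(img));
}
```

With this pattern the browser fetches only the images in (or near) the viewport, which is exactly the saving the bullet point describes.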

By do­ing this your orig­i­nal site will load faster than the AMP sites. And the web ex­pe­ri­ence will be bet­ter, more open and more di­verse to every­one.

I also tweet about things that I care about, think about and work on. If you’d like to hear more, do fol­low me or add your email to my oc­ca­sional newslet­ter.

I’m a digital marketer who grows startups and traffic using content, search and social media marketing. I’m available for hire, so if you have a marketing problem that you’d like my help with, reach out to me by emailing hi at my full name dot com. Learn more about me.

...

Read the original on markosaric.com »

3 1,127 shares, 47 trendiness, 4155 words and 31 minutes reading time

The Lesson to Unlearn

December 2019

The most damaging thing you learned in school wasn’t something you learned in any specific class. It was learning to get good grades.

When I was in college, a particularly earnest philosophy grad student once told me that he never cared what grade he got in a class, only what he learned in it. This stuck in my mind because it was the only time I ever heard anyone say such a thing.

For me, as for most students, the measurement of what I was learning completely dominated actual learning in college. I was fairly earnest; I was genuinely interested in most of the classes I took, and I worked hard. And yet I worked by far the hardest when I was studying for a test.

In theory, tests are merely what their name implies: tests of what you’ve learned in the class. In theory you shouldn’t have to prepare for a test in a class any more than you have to prepare for a blood test. In theory you learn from taking the class, from going to the lectures and doing the reading and/or assignments, and the test that comes afterward merely measures how well you learned.

In practice, as almost everyone reading this will know, things are so different that hearing this explanation of how classes and tests are meant to work is like hearing the etymology of a word whose meaning has changed completely. In practice, the phrase “studying for a test” was almost redundant, because that was when one really studied. The difference between diligent and slack students was that the former studied hard for tests and the latter didn’t. No one was pulling all-nighters two weeks into the semester.

Even though I was a diligent student, almost all the work I did in school was aimed at getting a good grade on something.

To many people, it would seem strange that the preceding sentence has a “though” in it. Aren’t I merely stating a tautology? Isn’t that what a diligent student is, a straight-A student? That’s how deeply the conflation of learning with grades has infused our culture.

Is it so bad if learning is conflated with grades? Yes, it is bad. And it wasn’t till decades after college, when I was running Y Combinator, that I realized how bad it is.

I knew of course when I was a student that studying for a test is far from identical with actual learning. At the very least, you don’t retain knowledge you cram into your head the night before an exam. But the problem is worse than that. The real problem is that most tests don’t come close to measuring what they’re supposed to.

If tests truly were tests of learning, things wouldn’t be so bad. Getting good grades and learning would converge, just a little late. The problem is that nearly all tests given to students are terribly hackable. Most people who’ve gotten good grades know this, and know it so well they’ve ceased even to question it. You’ll see when you realize how naive it sounds to act otherwise.

Suppose you’re taking a class on medieval history and the final exam is coming up. The final exam is supposed to be a test of your knowledge of medieval history, right? So if you have a couple days between now and the exam, surely the best way to spend the time, if you want to do well on the exam, is to read the best books you can find about medieval history. Then you’ll know a lot about it, and do well on the exam.

No, no, no, experienced students are saying to themselves. If you merely read good books on medieval history, most of the stuff you learned wouldn’t be on the test. It’s not good books you want to read, but the lecture notes and assigned reading in this class. And even most of that you can ignore, because you only have to worry about the sort of thing that could turn up as a test question. You’re looking for sharply-defined chunks of information. If one of the assigned readings has an interesting digression on some subtle point, you can safely ignore that, because it’s not the sort of thing that could be turned into a test question. But if the professor tells you that there were three underlying causes of the Schism of 1378, or three main consequences of the Black Death, you’d better know them. And whether they were in fact the causes or consequences is beside the point. For the purposes of this class they are.

At a university there are often copies of old exams floating around, and these narrow still further what you have to learn. As well as learning what kind of questions this professor asks, you’ll often get actual exam questions. Many professors re-use them. After teaching a class for 10 years, it would be hard not to, at least inadvertently.

In some classes, your professor will have had some sort of political axe to grind, and if so you’ll have to grind it too. The need for this varies. In classes in math or the hard sciences or engineering it’s rarely necessary, but at the other end of the spectrum there are classes where you couldn’t get a good grade without it.

Getting a good grade in a class on x is so different from learning a lot about x that you have to choose one or the other, and you can’t blame students if they choose grades. Everyone judges them by their grades: graduate programs, employers, scholarships, even their own parents.

I liked learning, and I really enjoyed some of the papers and programs I wrote in college. But did I ever, after turning in a paper in some class, sit down and write another just for fun? Of course not. I had things due in other classes. If it ever came to a choice of learning or grades, I chose grades. I hadn’t come to college to do badly.

Anyone who cares about getting good grades has to play this game, or they’ll be surpassed by those who do. And at elite universities, that means nearly everyone, since someone who didn’t care about getting good grades probably wouldn’t be there in the first place. The result is that students compete to maximize the difference between learning and getting good grades.

Why are tests so bad? More precisely, why are they so hackable? Any experienced programmer could answer that. How hackable is software whose author hasn’t paid any attention to preventing it from being hacked? Usually it’s as porous as a colander.

...

Read the original on paulgraham.com »

4 945 shares, 34 trendiness, 955 words and 8 minutes reading time

The “Great Cannon” has been deployed again

The Great Cannon is a dis­trib­uted de­nial of ser­vice tool (“DDoS”) that op­er­ates by in­ject­ing ma­li­cious Javascript into pages served from be­hind the Great Firewall. These scripts, po­ten­tially served to mil­lions of users across the in­ter­net, hi­jack the users’ con­nec­tions to make mul­ti­ple re­quests against the tar­geted site. These re­quests con­sume all the re­sources of the tar­geted site, mak­ing it un­avail­able:

Figure 1: Simplified di­a­gram of how the Great Cannon op­er­ates

The Great Cannon was the sub­ject of in­tense re­search af­ter it was used to dis­rupt ac­cess to the web­site Github.com in 2015. Little has been seen of the Great Cannon since 2015. However, we’ve re­cently ob­served new at­tacks, which are de­tailed be­low.

The Great Cannon is cur­rently at­tempt­ing to take the web­site LIHKG of­fline. LIHKG has been used to or­ga­nize protests in Hong Kong. Using a sim­ple script that uses data from UrlScan.io, we iden­ti­fied new at­tacks likely start­ing Monday November 25th, 2019.
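As an illustration of the kind of simple script involved, a search against UrlScan.io’s public API could be built like this (the `/api/v1/search/` endpoint is documented by UrlScan.io, but the exact query string below is my assumption, not the authors’ actual script):

```javascript
// Hedged sketch: build a UrlScan.io search URL for scans involving a given
// domain. The endpoint is from UrlScan.io's public API; the query syntax
// used here ("domain:<host>") is an assumption for illustration.
function urlscanSearchUrl(query, size = 100) {
  const params = new URLSearchParams({ q: query, size: String(size) });
  return `https://urlscan.io/api/v1/search/?${params}`;
}

// Usage (network call left to the reader):
// fetch(urlscanSearchUrl('domain:lihkg.com')).then(r => r.json());
```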

Websites are in­di­rectly serv­ing a ma­li­cious javascript file from ei­ther:

Normally these URLs serve stan­dard an­a­lyt­ics track­ing scripts. However, for a cer­tain per­cent­age of re­quests, the Great Cannon swaps these on the fly with ma­li­cious code:

The code at­tempts to re­peat­edly re­quest the fol­low­ing re­sources, in or­der to over­whelm web­sites and pre­vent them from be­ing ac­ces­si­ble:

These may seem like an odd selection of websites and memes to target; however, these meme images appear on the LIHKG forums, so the traffic is likely intended to blend in with normal traffic. The URLs are appended to the LIHKG image proxy URL (e.g., https://na.cx/i/6hxp6x9.gif becomes https://i.lih.kg/540/https://na.cx/i/6hxp6x9.gif?t=6009966493), which causes LIHKG to perform the bandwidth- and computationally expensive task of taking a remote image, changing its size, then serving it to the user.
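The URL transform described above can be sketched in one line (an illustrative reconstruction based on the example given; the function and parameter names are mine, not from the attack code):

```javascript
// Illustrative reconstruction: wrapping a remote meme image in LIHKG's
// resizing proxy forces an expensive fetch-and-resize on every request.
// Width and cache-buster values are taken from the example in the text.
function proxyUrl(imageUrl, width, cacheBuster) {
  return `https://i.lih.kg/${width}/${imageUrl}?t=${cacheBuster}`;
}
```

The `?t=` cache-buster ensures each request looks unique, defeating any caching in front of the proxy.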

It is unlikely these sites will be seriously impacted, partly because LIHKG sits behind an anti-DDoS service, and partly because of some bugs in the malicious Javascript code that we won’t discuss here.

Still, it is dis­turb­ing to see an at­tack tool with the po­ten­tial power of the Great Cannon used more reg­u­larly, and again caus­ing col­lat­eral dam­age to US based ser­vices.

These at­tacks would not be suc­cess­ful if the fol­low­ing re­sources were served over HTTPS in­stead of HTTP:

You may want to con­sider block­ing these URLs when not sent over HTTPS.
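A minimal sketch of such a block rule, as it might appear in a filtering proxy or browser extension (the host list is taken from the detection rules at the end of this post; the helper itself is an assumption, not a recommendation of any specific tool):

```javascript
// Sketch (assumed logic): block the analytics-script hosts seen in these
// attacks only when fetched over plain HTTP. HTTPS copies cannot be
// swapped out by an on-path injector, so they are left alone.
const RISKY_HOSTS = new Set([
  'push.zhanzhang.baidu.com',  // hosts that appear in the Suricata
  'js.passport.qihucdn.com',   // rules at the end of this post
  'hm.baidu.com',
]);

function shouldBlock(rawUrl) {
  const u = new URL(rawUrl);
  return u.protocol === 'http:' && RISKY_HOSTS.has(u.hostname);
}
```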

Below we have de­scribed pre­vi­ous Great Cannon at­tacks, in­clud­ing pre­vi­ous at­tacks against LIHKG in September 2019.

During the 2015 at­tacks, DDoS scripts were sent in re­sponse to re­quests sent to a num­ber of do­mains, for both Javascript and HTML pages served over HTTP from be­hind the Great Firewall.

A num­ber of dis­tinct stages and tar­gets were iden­ti­fied:

* March 3 to March 6, 2015: Initial, lim­ited test fir­ing of the Great Cannon starts.

* March 13: New at­tacks against an or­ga­ni­za­tion that mon­i­tors cen­sor­ship (GreatFire.org).

Figure 3: Snippet of the code used in early Great Cannon at­tacks. Later scripts were im­proved to not re­quire ex­ter­nal Javascript li­braries.

* March 25: Attacks against GitHub.com start, tar­get­ing con­tent hosted from the site GreatFire.org and a Chinese edi­tion of the New York Times. This re­sulted in a global out­age of the GitHub ser­vice.

Figure 4: The URLs tar­geted in the at­tack against Github.com.

* March 26: Attacks began using code hidden with the Javascript obfuscator “packer”:

Figure 5: Snippet of the ob­fus­cated code. Current at­tacks con­tinue to use the same ob­fus­ca­tion.

Research by CitizenLab identified multiple likely points where the malicious code is injected. The Great Cannon operated probabilistically, injecting return packets into a certain percentage of requests for Javascript from certain IP addresses. As noted by commentators at the time, the same functionality could also be used to insert exploitation code to enable “Man-on-the-side” attacks to compromise key targets.
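The probabilistic behavior CitizenLab described can be modeled in a few lines (a toy simulation for intuition only, not the Cannon’s actual code):

```javascript
// Toy model of probabilistic on-path injection: only a fraction of
// requests receive the swapped-in payload; the rest pass through
// untouched, which makes the injection harder to reproduce and study.
function injectSome(requests, probability, rng = Math.random) {
  return requests.map(req => ({ ...req, injected: rng() < probability }));
}
```

This selectivity is one reason the 2015 attacks were hard to measure: two observers fetching the same script could see different responses.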

In August 2017, Great Cannon at­tacks against a Chinese-language news web­site (Mingjingnews.com) were iden­ti­fied by a user on Stack Overflow. The code in the 2017 at­tack is sig­nif­i­cantly re-writ­ten and is largely un­changed in the at­tacks seen in 2019.

Figure 6: An ex­cerpt of the code to tar­get Mingjingnews.com in 2017.

We have con­tin­ued to see at­tacks against Mingjingnews in the last year.

On August 31, 2019, the Great Cannon ini­ti­ated an at­tack against a web­site (lihkg.com) used by mem­bers of the Hong Kong democ­racy move­ment to plan protests.

The Javascript code is very sim­i­lar to the packer code used in the at­tacks against Mingjingnews ob­served in 2017 and on­ward, and the code was served from at least two lo­ca­tions:

Later ver­sions tar­geted mul­ti­ple pages and at­tempted (unsuccessfully) to by­pass DDoS mit­i­ga­tions that the web­site own­ers had im­ple­mented.

We de­tect the Great Cannon serv­ing ma­li­cious Javascript with the fol­low­ing Suricata rules from AT&T Alien Labs and Emerging Threats Open.

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"AV INFO JS File associated with Great Cannon DDoS"; flow:to_server,established; content:"GET"; http_method; content:"push.js"; http_uri; content:"push.zhanzhang.baidu.com"; http_host; flowbits:set,AVCannonDDOS; flowbits:noalert; classtype:misc-activity; sid:4001470; rev:1;)

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"AV INFO JS File associated with Great Cannon DDoS"; flow:to_server,established; content:"GET"; http_method; content:"11.0.1.js"; http_uri; content:"js.passport.qihucdn.com"; http_host; flowbits:set,AVCannonDDOS; flowbits:noalert; classtype:misc-activity; sid:4001471; rev:1;)

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV INFO Potential DDoS attempt related to Great Cannon Attacks"; flow:established,to_client; content:"200"; http_stat_code; file_data; content:"isImgComplete"; flowbits:isset,AVCannonDDOS; reference:url,otx.alienvault.com/pulse/5d6d4da02ee2b6fbff703067; classtype:policy-violation; sid:4001473; rev:1;)

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"AV INFO JS File associated with Great Cannon DDoS"; flow:to_server,established; content:"GET"; http_method; content:"hm.js"; http_uri; content:"hm.baidu.com"; http_host; flowbits:set,AVCannonDDOS; flowbits:noalert; classtype:misc-activity; sid:4001472; rev:1;)

ET WEB_CLIENT Great Cannon DDoS JS M1 sid:2027961

ET WEB_CLIENT Great Cannon DDoS JS M2 sid:2027962

ET WEB_CLIENT Great Cannon DDoS JS M3 sid:2027963

ET WEB_CLIENT Great Cannon DDoS JS M4 sid:2027964

Additional in­di­ca­tors and code sam­ples are avail­able in the Open Threat Exchange pulse.

...

Read the original on cybersecurity.att.com »

5 754 shares, 23 trendiness, 1626 words and 14 minutes reading time

Why Taxpayers Pay McKinsey $3M a Year for a Recent College Graduate Contractor

Welcome to BIG, a newslet­ter about the pol­i­tics of mo­nop­oly. If you’d like to sign up, you can do so here. Or just read on…

A few days ago, Ian MacDougall came out with a New York Times/ProPublica piece on how con­sult­ing gi­ant McKinsey struc­tured Trump im­mi­gra­tion pol­icy. Lots of peo­ple cover im­mi­gra­tion. I’m go­ing to dis­cuss why the gov­ern­ment buys over­priced ser­vices from McKinsey. (Spoiler: It goes back to, of course, Bill Clinton.)

First, I’ll be do­ing a book talk in D. C. on the evening of Dec 10th and one in New York on the evening of Dec 18th for my book Goliath: The 100-Year War Between Monopoly Power and Democracy. There’s info be­low my sig­na­ture with de­tails. Business Insider named Goliath one of the best books of the year on how we can re­think cap­i­tal­ism. If you have thoughts on the book, as al­ways, let me know.

In case you're in the mood for podcasts, I was recently on Lapham's Quarterly's podcast and on Pitchfork Economics with Nick Hanauer to talk monopoly and Goliath.

As reg­u­lar read­ers of BIG know, my ba­sic the­ory of the world is that most of our po­lit­i­cal econ­omy prob­lems are caused by these guys be­ing in charge of every­thing.

The Point of McKinsey: Charging $3 Million a Year for the Work of a 23-Year Old

McKinsey has a lot of high-fly­ing rhetoric about strat­egy, sus­tain­abil­ity, and so­cial jus­tice. The com­pany os­ten­si­bly pur­sues in­tel­lec­tual and busi­ness ex­cel­lence, while also us­ing its peo­ple skills to help Syrian refugees. That’s nice.

But let’s start with what McKinsey is re­ally about, which is get­ting or­ga­ni­za­tional lead­ers to pay a large amount of money for fairly pedes­trian ad­vice. In MacDougall’s ar­ti­cle on McKinsey’s work on im­mi­gra­tion, most of the con­ver­sa­tion has been about McKinsey’s push to en­gage in cruel be­hav­ior to­wards de­tainees. But let’s not lose sight of the in­cen­tive dri­ving the re­la­tion­ship, which was McKinsey’s po­lit­i­cal abil­ity to ex­tract cash from the gov­ern­ment. Here’s the nub of that part of the story.

The consulting firm's sway at ICE grew to the point that McKinsey's staff even ghostwrote a government contracting document that defined the consulting team's own responsibilities and justified the firm's retention, a contract extension worth $2.2 million. "Can they do that?" an ICE official wrote to a contracting officer in May 2017. The response reflects how deeply ICE had come to rely on McKinsey's assistance. "Well it obviously isn't ideal to have a contractor tell us what we want to ask them to do," the contracting officer replied. But unless someone from the government could articulate the agency's objectives, the officer added, "what other option is there?" ICE extended the contract.

Such practices used to be called "honest graft." And let's be clear, McKinsey's services are very expensive. Back in August, I noted that McKinsey's competitor, the Boston Consulting Group, charges the government $33,063.75/week for the time of a recent college grad to work as a contractor. Not to be outdone, McKinsey's pricing is much, much higher, with one McKinsey "business analyst" - someone with an undergraduate degree and no experience - lent to the government priced out at $56,707/week, or $2,948,764/year.
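The quoted annual figure is simply the weekly rate times 52 weeks; a quick check of the arithmetic using the numbers from the paragraph above:

```python
# Rates quoted in the article, per billed week.
BCG_WEEKLY = 33_063.75   # Boston Consulting Group college grad
MCKINSEY_WEEKLY = 56_707  # McKinsey "business analyst"

# Annualize the McKinsey rate: 56,707 * 52 = 2,948,764,
# matching the article's $2,948,764/year figure.
mckinsey_annual = MCKINSEY_WEEKLY * 52
print(mckinsey_annual)  # 2948764
```

The same multiplication puts the BCG analyst at roughly $1.72 million a year, which is why the article calls McKinsey's pricing "much, much higher."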

How does McKinsey do it? There are two an­swers. The first is sim­ple. They cheat. McKinsey is far more ex­pen­sive than its com­pe­ti­tion, and is able to get that pric­ing be­cause of its un­eth­i­cal tac­tics. In fact, the sit­u­a­tion is so dire that ear­lier this year the General Services Administration’s Inspector General rec­om­mended in a re­port that the GSA can­cel McKinsey’s en­tire gov­ern­ment-wide con­tract. Here’s what the IG showed McKinsey was even­tu­ally awarded.

The Inspector General il­lus­trated straight­for­ward cor­rup­tion at the GSA, which is the agency that sets sched­ules for how much things cost for the en­tire U. S. gov­ern­ment (and many states and lo­cal­i­ties, who also use GSA sched­ules).

What happened is fairly simple. McKinsey asked for a 10-14% price hike for its already expensive IT professional services (which is a catch-all for anything). The government contracting officer said no, calling the proposal to update the firm's contract schedule with much higher costs "ridiculous." So McKinsey went to the officer's boss, the Division Director. In 2016, a McKinsey representative sent the following email to the GSA Division Director.

We would re­ally ap­pre­ci­ate it if you could as­sist us with our Schedule 70 ap­pli­ca­tion. In par­tic­u­lar, given that you un­der­stand our model, it would be enor­mously help­ful if you could help the Schedule 70 Contracting Officer un­der­stand how it ben­e­fits the gov­ern­ment.

The pestering worked. The GSA Division Director seems to have had the contract reassigned and granted the price increase McKinsey wanted. The director also seems to have lied to the inspector general, as well as manipulating pricing data, breaking rules on sole source contracting, and pitching various other government agencies, like the National Oceanic and Atmospheric Administration, to buy McKinsey services. Eventually the director straight up said, "My only interest is helping out my contractor."

From 2006 when McKinsey signed its orig­i­nal sched­ule price, to 2019, it re­ceived roughly $73.5 mil­lion/​year, or $956.2 mil­lion in to­tal rev­enue from the gov­ern­ment. The in­spec­tor gen­eral es­ti­mated the scam from 2016 on­ward would cost $69 mil­lion in to­tal over­pay­ments. It’s a scan­dal. But still, some­thing about it does­n’t quite make sense. Why would a gov­ern­ment di­vi­sion di­rec­tor at the GSA seek to in­crease costs for the gov­ern­ment? It’s not bribery, since the IG did­n’t rec­om­mend fir­ing or ar­rest­ing the gov­ern­ment of­fi­cial who pushed up costs (or at least that’s not in the IG re­port).

And this gets to the second reason why McKinsey can charge so much, which has less to do with McKinsey and more with an incentive to overpay more generally. It's more likely something called the "Industrial Funding Fee," or IFF. The GSA's Federal Acquisition Service gets a cut of whatever certain contractors spend using the GSA's schedule, and this cut is the IFF. The IFF is priced at 0.75% of the total amount of a government contract. In the case of McKinsey, "since 2006, FAS has realized $7.2 million in Industrial Funding Fee revenue."
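The IFF figures are consistent with each other: 0.75% of the roughly $956.2 million McKinsey has billed through the schedule since 2006 lands very close to the $7.2 million in fees quoted. A quick check:

```python
# Figures from the article: total McKinsey schedule revenue since 2006,
# and the IFF rate of 0.75% of contract value.
total_revenue = 956.2e6
iff_rate = 0.0075

iff_collected = total_revenue * iff_rate
print(round(iff_collected / 1e6, 2))  # ~7.17 million, close to the quoted $7.2M
```

The small gap between $7.17M and $7.2M is just rounding in the reported revenue total; the point stands that the GSA's cut scales directly with how much contractors bill.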

In other words, the agency of the government in charge of bulk buying isn't paid for saving money, but for spending too much of it. The IFF also incentivizes the GSA to get the government to outsource to contractors anything it can, simply to get more budget. The IFF has been creating problems like the McKinsey over-payment for a long time. In 2013, the GSA Inspector General traced a similar situation with different contractors. Managers at GSA overruled line contracting officers to raise the prices taxpayers pay for contractors Carahsoft, Deloitte and Oracle. Government managers at GSA micro-managed and harassed their subordinates and damaged the careers of contracting officers trying to negotiate fair prices for the taxpayer.

How did the GSA get such a screwed up incentive as the Industrial Funding Fee? Well, in 1995, to get the government to become more entrepreneurial as part of its "Reinventing Government" initiative, Bill Clinton's administration implemented the Industrial Funding Fee structure. It worked in generating money for the GSA. It worked so well that Congress's investigative agency found in 2002 that the GSA had stopped having to rely on Congressional appropriations. It had so much extra money that it started to spend lavishly on its "fleet program," which is to say vehicle purchases. In other words, the GSA earned so much money by outsourcing the work of other government agencies to high-priced contractors that it just started buying fleets of extra cars.

By 2010, GSA had "sales" of $39 billion a year. While the GSA is supposed to remit extra revenue to the taxpayer, it often just stuffs the money into overfunded reserves. More fundamentally, the culture of the procurement agencies of government has been completely warped. The GSA's entire reason for existing was to do better purchasing for the government, using both expertise and mass buying power to get value. But now officials try to generate revenue by getting the government to spend more money on overpriced contractors. Like McKinsey.

Does McKinsey do a good job? The answer is that it's probably no better or worse than anyone else. I'm sure there are times when McKinsey is quite helpful, but it's in all probability vastly overpriced for what it is, which is basically a group of smart people who know how to use PowerPoint presentations and speak in soothing tones. You can just go through news clippings and find areas where McKinsey did cookie-cutter nonsense. For instance, McKinsey helped ruin an IT implementation for intelligence services. In the immigration story, MacDougall shows that the consulting firm encouraged ICE to give less food and medical care to detainees. That's cruelty, not efficiency.

Still, it's not all on McKinsey. The Industrial Funding Fee is one reason why paying $3 million a year for a 23-year-old McKinsey employee, instead of hiring an experienced person directly to do IT management, has some logic for a government procurement division head. The policy solution here is fairly simple - kill the IFF structure and finance government procurement agencies directly through Congressional appropriations. Also follow the IG's recommendation and cancel McKinsey's contracting schedule.

At any rate, at some point decades ago, we de­cided that most po­lit­i­cal and busi­ness in­sti­tu­tions in America should be or­ga­nized around cheat­ing peo­ple. In this case, the warped and de­crepit state of the GSA leads to McKinsey-ifying the en­tire gov­ern­ment. Mr. Clinton, you took a fine gov­ern­ment that ba­si­cally worked, and ru­ined it. McKinsey sends its thanks.

Thanks for read­ing. And if you liked this es­say, you can sign up here for more is­sues of BIG, a newslet­ter on how to re­store fair com­merce, in­no­va­tion and democ­racy. If you want to re­ally un­der­stand the se­cret his­tory of mo­nop­oly power, buy my book, Go­liath: The 100-Year War Between Monopoly Power and Democracy.

P. S. I’ll be giv­ing book talks in D.C. and New York. Info is be­low.

December 10 (Tuesday)

Washington, D.C.

Book talk at Public Citizen at 5pm

1600 20th St., NW, Washington, DC 20009

Please RSVP to Aileen Walsh at awalsh@cit­i­zen.org.

December 18 (Wednesday)

Brooklyn, NY - Old Stone House from 6:30-8pm

A Conversation with Matt Stoller, au­thor of Goliath”, and Zephyr Teachout

336 3rd Street

Brooklyn, 11215

INFO

...

Read the original on mattstoller.substack.com »

6 702 shares, 25 trendiness, 506 words and 4 minutes reading time

SwiftLaTeX/SwiftLaTeX

Try it here: https://www.swiftlatex.com/oauth/login_oauth?type=sandbox

SwiftLaTeX is a Web-browser based editor to create PDF documents such as reports, term projects, and slide decks in the typesetting system LaTeX. In contrast to other web-based editors, SwiftLaTeX is true WYSIWYG, what-you-see-is-what-you-get: you edit directly in a representation of the print output. You can import a LaTeX document at any stage of completeness into SwiftLaTeX. You can start a new document with SwiftLaTeX, or you can use SwiftLaTeX for final copy-editing. For advanced operation you can edit the so-called LaTeX source code, which gives you more fine-grained control. SwiftLaTeX is collaborative; you can share your project with others and work on it at the same time. SwiftLaTeX stores your data in the cloud under your account; currently it supports Google Drive and Dropbox.

You are welcome to host SwiftLaTeX yourself under the AGPL license, or you can use our website https://www.swiftlatex.com.

You first need to register as a Google API developer to retrieve a Google API Client ID and Secret. See https://developers.google.com/identity/protocols/OAuth2.

Edit config.py and put your Client ID and Secret inside. (You can use environment variables instead.)
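A minimal sketch of what that might look like. The variable names here are assumptions for illustration, not SwiftLaTeX's actual configuration keys; check config.py in the repository for the real ones.

```python
# config.py -- hypothetical layout; the real SwiftLaTeX keys may differ.
import os

# Prefer environment variables so secrets stay out of version control;
# fall back to placeholder literals only for local experimentation.
GOOGLE_CLIENT_ID = os.environ.get(
    "GOOGLE_CLIENT_ID", "replace-with-your-client-id.apps.googleusercontent.com"
)
GOOGLE_CLIENT_SECRET = os.environ.get(
    "GOOGLE_CLIENT_SECRET", "replace-with-your-client-secret"
)
```

Using `os.environ.get` with a default keeps the same file working in both modes the README mentions: edited in place, or configured entirely through the environment.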

All packages are dynamically loaded from our file server and cached locally. Our file server has almost all the packages. If you want to host the file server yourself, you can check out another repo: https://github.com/elliott-wen/texlive-server

Currently, this engine is built atop pdftex, so Unicode is not supported. We are working to port xetex in a future release. The engine source code is hosted at https://github.com/SwiftLaTeX/PdfTeXLite. It is not usable yet, as we need more time to upload and tidy up the source code. Stay tuned.

WYSIWYG

Formulas are absolutely positioned; therefore, the correct display only appears after a compilation. Redundant spaces occur between words.

Slow Upload to Google

Our system abstracts your cloud storage as a POSIX-like file system to simplify the user-interface implementation, at the cost of a little performance. We are working hard to improve our implementation to reduce the network round-trip time.

As an open source project, SwiftLaTeX strongly benefits from an active community. It is a good idea to announce your plans on the issue list, so everybody knows what's going on and there is no duplicate work.

The easiest way to help with the development of SwiftLaTeX is to use it! Furthermore, if you like SwiftLaTeX, tell all your friends and colleagues about it.

User feedback is highly welcome. If you want to report bugs about TeX documents that do not compile, please attach the snippets so that we can look into them.

If you are sending a PR, you retain the copyright of your contribution, but you must agree to give us a license to use it in both the open source version and the version of SwiftLaTeX running at www.swiftlatex.com, which may include additional changes. For more details, see https://www.swiftlatex.com/contribute.html.

Thank you very much for your in­ter­est and sup­port:) We are very happy to re­ceive such a warm re­sponse. We are cur­rently over­whelmed by our full-time jobs at uni. But we will try our best to mon­i­tor this repo and keep im­prov­ing the code daily. Really ap­pre­ci­ate your pa­tience and wish you all a won­der­ful hol­i­day sea­son:)

If you are interested in the technical details, you can have a look at https://dl.acm.org/citation.cfm?id=3209522&dl=ACM&coll=DL

(Though some stuff in the pa­per is out­dated.)

...

Read the original on github.com »

7 597 shares, 22 trendiness, 570 words and 6 minutes reading time

Two malicious Python libraries caught stealing SSH and GPG keys

The Python se­cu­rity team re­moved two tro­janized Python li­braries from PyPI (Python Package Index) that were caught steal­ing SSH and GPG keys from the pro­jects of in­fected de­vel­op­ers.

The two libraries were created by the same developer and mimicked other, more popular libraries — using a technique called typosquatting to register similar-looking names.

The first is "python3-dateutil," which imitated the popular "dateutil" library. The second is "jeIlyfish" (the first L is an I), which mimicked the "jellyfish" library.
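The trick works because a capital "I" and a lowercase "l" render identically in many fonts. A rough way to catch this class of typosquat is to normalize common homoglyphs before comparing names — a sketch for illustration, not the detection PyPI actually uses:

```python
def looks_like_typosquat(candidate: str, legit: str) -> bool:
    """Flag names that differ from a legitimate package name only by
    common homoglyph swaps (capital I for l, zero for o)."""
    table = str.maketrans({"I": "l", "0": "o"})
    normalized = candidate.translate(table).lower()
    return normalized == legit.lower() and candidate != legit

print(looks_like_typosquat("jeIlyfish", "jellyfish"))  # True
print(looks_like_typosquat("jellyfish", "jellyfish"))  # False
```

A real checker would also cover Unicode confusables and edit-distance-1 names, but even this two-character table is enough to flag the package in this story.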

The two ma­li­cious clones were dis­cov­ered on Sunday, December 1, by German soft­ware de­vel­oper Lukas Martini. Both li­braries were re­moved on the same day af­ter Martini no­ti­fied da­teu­til de­vel­op­ers and the PyPI se­cu­rity team.

While python3-dateutil was created and uploaded on PyPI just two days before, on November 29, the jeIlyfish library had been available for nearly a year, since December 11, 2018.

According to Martini, the ma­li­cious code was pre­sent only in the jeIly­fish li­brary. The python3-da­teu­til pack­age did­n’t con­tain ma­li­cious code of its own, but it did im­port the jeIly­fish li­brary, mean­ing it was ma­li­cious by as­so­ci­a­tion.

The code downloaded and read a list of hashes stored in a GitLab repository. The nature and purpose of these hashes was initially unknown, as neither Martini nor the PyPI team detailed the behavior in great depth before the library was promptly removed from PyPI.

ZDNet today asked Paul Ganssle, a member of the dateutil dev team, to take a closer look at the malicious code and put it in perspective for our readers.

"The code directly in the `jeIlyfish` library downloads a file called 'hashsum' that looks like nonsense from a gitlab repo, then decodes that into a Python file and executes it," Ganssle told ZDNet.

"It looks like [this file] tries to exfiltrate SSH and GPG keys from a user's computer and send them to this IP address: http://68.183.212.246:32258."

"It also lists a bunch of directories, home directory, PyCharm Projects directory," Ganssle added. "If I had to guess what the purpose of that is, I would say it's to figure out what projects the credentials work for so that the attacker can compromise that person's projects."

Both of the malicious libraries were uploaded on PyPI by the same developer, who used the username olgired2017 — also used for the GitLab account.

It is believed that olgired2017 created the dateutil clone in an attempt to capitalize on the original library's popularity and increase the reach of the malicious code; however, this also brought more attention from more developers and eventually ended up exposing his entire operation.

Excluding the ma­li­cious code, both ty­posquat­ted pack­ages were iden­ti­cal copies of the orig­i­nal li­braries, mean­ing they would have worked as the orig­i­nals.

Developers who did­n’t pay at­ten­tion to the li­braries they down­loaded or im­ported into their pro­jects should check to see if they’ve used the cor­rect pack­age names and did not ac­ci­den­tally use the ty­posquat­ted ver­sions.

If they accidentally used either of the two, developers are advised to change all the SSH and GPG keys they've used over the past year.
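One quick way to check an environment — a sketch, not an official remediation tool — is to scan the installed distributions for the two bad names:

```python
from importlib import metadata

# The two malicious packages named in this article.
BAD_NAMES = {"jeIlyfish", "python3-dateutil"}

def find_typosquats(installed_names=None):
    """Return any known-malicious package names present.

    By default, reads the names of distributions installed in the
    current environment; an explicit iterable can be passed for testing.
    """
    if installed_names is None:
        installed_names = (
            dist.metadata["Name"] for dist in metadata.distributions()
        )
    return {name for name in installed_names if name in BAD_NAMES}

# Example against an explicit list:
print(find_typosquats(["requests", "jeIlyfish"]))  # {'jeIlyfish'}
```

If the function returns a non-empty set for your environment, uninstall the package, install the legitimately named one, and rotate your keys as advised above.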

This is the fourth time the PyPI team has intervened to remove typosquatted malicious Python libraries from the official repository. Similar incidents happened in September 2017 (ten libraries), October 2018 (12 libraries), and July 2019 (three libraries).

Article up­dated one hour af­ter pub­li­ca­tion with Ganssle’s analy­sis.

...

Read the original on www.zdnet.com »

8 597 shares, 21 trendiness, 195 words and 2 minutes reading time

Take action now

Google wants to au­to­mate us

Find an al­ter­na­tive to Google Chrome which re­spects your de­ci­sion rights

Uninstall or dis­able Google Chrome on all your de­vices

Tell every­one you know to do the same — use the hash­tag #NoToChrome

We can no longer pre­tend that Google is a pos­i­tive force in the world.

There is a sim­ple first step that every in­ter­net user can take to make things a lit­tle bet­ter. Seek out a bet­ter web browser to re­place Google Chrome and tell every­one to do the same.

No to Chrome is de­signed as a start­ing point for any­one who uses the in­ter­net to send a mes­sage to Google that their re­lent­less dis­re­gard for our rights, dig­nity, democ­racy and com­mu­ni­ties will not be tol­er­ated.

There are many ways to protest against Google, ranging from tweets to full boycotts, but No to Chrome is designed so that anyone who uses the internet can participate easily and immediately.

Have a look at our Google prod­ucts pages to see the prob­lems with other Google prod­ucts and what you can do about them.

Use Apple prod­ucts and don’t use Chrome? Ensure Google is not your de­fault search en­gine in Safari.

...

Read the original on notochrome.org »

9 586 shares, 24 trendiness, 1772 words and 14 minutes reading time

Sinkholed

On 26 Nov 2019 at 14:55 UTC, I logged into my server that hosts my website to perform a simple maintenance activity. Merely three minutes later, at 14:58 UTC, the domain name susam.in used to host this website was transferred to another registrant without any authorization from me and without any notification sent to me. Since the DNS results for this domain name were cached on my system, I was unaware of this issue at that time. It would take me three days to realize that I had lost control of the domain name I had been using for my website for the last 12 years. This blog post documents when this happened, how this happened, and what it took to regain control of this domain name.

On 29 Nov 2019 at 19:00 UTC, when I vis­ited my web­site hosted at https://​susam.in/, I found that a zero-byte file was be­ing served at this URL. My web­site was miss­ing. In fact, the do­main name re­solved to an IPv4 ad­dress I was un­fa­mil­iar with. It did not re­solve to the ad­dress of my Linode server any­more.

I checked the WHOIS records for this do­main name. To my as­ton­ish­ment, I found that I was no longer the reg­is­trant of this do­main. An en­tity named The Verden Public Prosecutor’s Office was the new reg­is­trant of this do­main. The WHOIS records showed that the do­main name was trans­ferred to this or­ga­ni­za­tion on 26 Nov 2019 at 14:58 UTC, merely three min­utes af­ter I had per­formed my main­te­nance ac­tiv­ity on the same day. Here is a snip­pet of the WHOIS records that I found:

Domain Name: susam.in

Registry Domain ID: D2514002-IN

Registrar WHOIS Server:

Registrar URL:

Updated Date: 2019-11-26T14:58:00Z

Creation Date: 2007-05-15T07:19:26Z

Registry Expiry Date: 2020-05-15T07:19:26Z

Registrar: NIXI Special Projects

Registrar IANA ID: 700066

Registrar Abuse Contact Email:

Registrar Abuse Contact Phone:

Domain Status: clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited

Domain Status: serverRenewProhibited http://www.icann.org/epp#serverRenewProhibited

Domain Status: serverDeleteProhibited http://www.icann.org/epp#serverDeleteProhibited

Domain Status: serverUpdateProhibited http://www.icann.org/epp#serverUpdateProhibited

Domain Status: serverTransferProhibited http://www.icann.org/epp#serverTransferProhibited

Registry Registrant ID:

Registrant Name:

Registrant Organization: The Verden Public Prosecutor’s Office

Registrant Street:

Registrant Street:

Registrant Street:

Registrant City:

Registrant State/Province: Niedersachsen

Name Server: sc-c.sinkhole.shadowserver.org

Name Server: sc-d.sinkhole.shadowserver.org

Name Server: sc-a.sinkhole.shadowserver.org

Name Server: sc-b.sinkhole.shadowserver.org

The el­lip­sis de­notes some records I have omit­ted for the sake of brevity. There were three things that stood out in these records:

The reg­is­trar was changed from eNom, Inc. to NIXI Special Projects.

The registrant was changed from Susam Pal to The Verden Public Prosecutor's Office.

The name servers were changed from Linode's servers to Shadowserver's sinkholes.
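The tell-tale name-server change is easy to spot programmatically. Here is a small sketch that scans WHOIS output like the records above for sinkhole name servers; the parsing is deliberately simplistic, since real WHOIS formats vary by registry:

```python
def sinkhole_name_servers(whois_text: str) -> list[str]:
    """Extract name servers that point at a known sinkhole operator."""
    servers = []
    for line in whois_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "name server":
            servers.append(value.strip())
    return [s for s in servers if "sinkhole" in s.lower()]

record = """\
Domain Name: susam.in
Name Server: sc-a.sinkhole.shadowserver.org
Name Server: sc-b.sinkhole.shadowserver.org
"""
print(sinkhole_name_servers(record))
# ['sc-a.sinkhole.shadowserver.org', 'sc-b.sinkhole.shadowserver.org']
```

A periodic check like this against your own domains would have surfaced the transfer within minutes rather than three days, without depending on your local DNS cache.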

On searching more about the new registrant on the web, I realized that it was a German criminal justice body that was involved in the takedown of the Avalanche malware-hosting network. It took a four-year concerted effort by INTERPOL, Europol, the Shadowserver Foundation, Eurojust, the Lüneburg Police, and several other international organizations to finally destroy the Avalanche botnet on 30 Nov 2016. In this list of organizations, one name caught my attention immediately: the Shadowserver Foundation. The WHOIS name server records pointed to Shadowserver's sinkholes.

The fact that the domain name was transferred to another organization merely three minutes after I had performed a simple maintenance activity got me worried. Was the domain name hijacked? Did my maintenance activity on the server have anything to do with it? What kind of attack might someone have pulled off to hijack the domain name? I checked all the logs, and there was no evidence that anyone other than me had logged into the server or executed any commands or code on it. Further, a domain name transfer usually involves email notification and authorization. None of that had happened. It increasingly looked like the three-minute interval between the maintenance activity and the domain name transfer was merely a coincidence.

More ques­tions sprang up as I thought about it. The Avalanche bot­net was de­stroyed in 2016. What has that got to do with the do­main name be­ing trans­ferred in 2019? Did my server some­how be­come part of the Avalanche bot­net? My server ran a min­i­mal in­stal­la­tion of the lat­est Debian GNU/Linux sys­tem. It was al­ways kept up-to-date to min­i­mize the risk of mal­ware in­fec­tion or se­cu­rity breach. It hosted a sta­tic web­site com­posed of sta­tic HTML files served with Nginx. I found no ev­i­dence of unau­tho­rized ac­cess of my server while in­spect­ing the logs. I could not find any mal­ware on the sys­tem.

The presence of Shadowserver sinkhole name servers in the WHOIS records was a good clue. Sinkholing of a domain name can be done both maliciously and constructively. In this case, it looked like the Shadowserver Foundation intended to sinkhole the domain name constructively, so that any malware client trying to connect to my server nefariously would end up connecting to a sinkhole address instead. My domain name was sinkholed! The question now was: why was it sinkholed?

On 29 Nov 2019 at 19:29 UTC, I sub­mit­ted a sup­port ticket to Namecheap to re­port this is­sue. At 21:05 UTC, I re­ceived a re­sponse from Namecheap sup­port that they have con­tacted Enom, their up­stream reg­is­trar, to dis­cuss the is­sue. There was no es­ti­mate for when a res­o­lu­tion might be avail­able.

At 21:21 UTC, I submitted a domain name transfer complaint to the Internet Corporation for Assigned Names and Numbers (ICANN). I was not expecting any response from ICANN because they do not have any contractual authority over a country code top-level domain (ccTLD).

At 21:23 UTC, I emailed the National Internet Exchange of India (NIXI). NIXI is the ccTLD manager for the .IN domain and has authority over it. I found their contact details in the IANA Delegation Record for .IN. Again, I was not expecting a response from NIXI because they do not have a contractual relationship directly with me. They have a contractual relationship with Namecheap, so any communication from them would be received by Namecheap, and Namecheap would have to pass that on to me.

At 21:30 UTC, ICANN re­sponded and said that I should con­tact the ccTLD man­ager di­rectly. Like I ex­plained in the pre­vi­ous para­graph, I had al­ready done that, so there was noth­ing more for me to do ex­cept wait for Namecheap to pro­vide an up­date af­ter their in­ves­ti­ga­tion. By the way, NIXI never replied to my email.

On 30 Nov 2019 at 07:30 UTC, I shared this issue on Twitter. I was hoping that someone who had been through a similar experience could offer some advice. In fact, soon after I posted the tweet, a kind person named Max from Germany generously offered to help by writing a letter in German addressed to the new registrant, which was a German organization. The reason for sinkholing my domain name was still unclear. I hoped that with enough retweets, someone closer to the source of truth could shed some light on why and how this happened.

At 09:54 UTC, Richard Kirkendall, founder and CEO of Namecheap, responded to my tweet and informed me that they were contacting NIXI regarding the issue. This seemed like a good step towards resolution. After all, the domain name was no longer under their upstream registrar, Enom. The domain name was now with NIXI, as evident from the WHOIS records.

Several other users tweeted about my is­sue, added more in­for­ma­tion about what might have hap­pened, and retweeted my tweet.

On 1 Dec 2019 at 11:48 UTC, Benedict Addis from the Shadowserver Foundation contacted me by email. He said that they had begun looking into this issue as soon as one of the tweets about it had referred to their organization. He explained in his email that my domain name was sinkholed accidentally as part of their Avalanche operation. Although it is now three years since the initial takedown of the botnet, they still see over 3.5 million unique IP addresses connecting to their sinkholes every day. Unfortunately, their operation inadvertently flagged my domain name as one of the domain names to be sinkholed because it matched the pattern of command and control (C2) domain names generated by a malware family named Nymaim, one of the malware families hosted on Avalanche. Although they had validity checks to avoid sinkholing false positives, my domain name unfortunately slipped through those checks. Benedict mentioned that he had just raised this issue with NIXI and requested them to return the domain name to me as soon as possible.
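To see how a real, human-registered domain can collide with machine-generated ones, here is a generic toy DGA. This is emphatically not Nymaim's actual algorithm (which is more complex and not reproduced here); it only illustrates how a seeded generator churns out short, plausible-looking .in names that a takedown operation might pre-register or sinkhole in bulk:

```python
import random

def toy_dga(seed: int, count: int = 5, length: int = 5) -> list[str]:
    """Generate pseudo-random candidate C2 domains (illustrative only)."""
    rng = random.Random(seed)
    domains = []
    for _ in range(count):
        name = "".join(
            rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(length)
        )
        domains.append(name + ".in")
    return domains

# A sinkholing operation that sweeps up every candidate a generator like
# this produces risks catching a legitimate short domain that happens to
# match the pattern -- which is the false-positive failure described above.
print(toy_dga(seed=2019))
```

Since the generator is deterministic for a given seed, defenders can precompute the candidate list; the hard part, as this incident shows, is filtering out legitimate domains that merely look like candidates.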

On 2 Dec 2019 at 04:00 UTC, when I looked up the WHOIS records for the domain name, I found that it had already been returned to me. At 08:37 UTC, Namecheap support responded to my support ticket to say that they had been informed that NIXI had returned the domain name to its original state. At 09:55 UTC, Juliya Zinovjeva, Domains Product Manager of Namecheap, commented on Twitter and confirmed the same.
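For readers who want to watch a domain's state themselves, WHOIS is just a plain-text request over TCP port 43 (RFC 3912). The sketch below assumes `whois.registry.in` as the .in registry's WHOIS endpoint; the helper names and the canned response fragment are illustrative, not real registry output:

```python
import socket

def whois_query(domain: str, server: str = "whois.registry.in") -> str:
    """Send a raw WHOIS query: the domain name plus CRLF over TCP port 43."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

def parse_fields(response: str, keys: tuple[str, ...]) -> dict[str, str]:
    """Pull selected 'Key: value' lines out of a WHOIS response."""
    fields = {}
    for line in response.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in keys:
            fields[key.strip()] = value.strip()
    return fields

# Canned fragment for demonstration (not real registry output):
sample = """\
Domain Name: example.in
Registrar: NIXI Special Projects
Status: clientTransferProhibited
"""
print(parse_fields(sample, ("Registrar", "Status")))
```

Polling the `Registrar` and `Status` fields this way is how a sudden, unexplained change of registrant or status — like the one described above — would first show up.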

Despite the successful resolution, it was still quite unsettling that a domain name could be transferred to another registrant and sinkholed for a perceived violation. I thought there would be more checks in place to confirm that a perceived violation was real before a domain name could be transferred. Losing a domain name I had been using actively for 12 years was an unpleasant experience. Losing a domain name accidentally should be a lot harder than this. Benedict from the Shadowserver Foundation assured me that my domain name would be excluded from future sinkholing in this particular case. However, the possibility that this could happen again due to another, unrelated operation by another organization in the future is disconcerting.

I also wondered whether a domain name under a country code top-level domain (ccTLD) like .in is more susceptible to this kind of sinkholing than one under a generic top-level domain (gTLD) like .com. I asked Benedict if it would be worth migrating my website from .in to .com. He replied that, in his personal opinion, NIXI runs an excellent, clean registry and is very responsive in resolving issues when they arise. He also added that domain generation algorithms (DGAs) of malware are equally, and possibly more, problematic for .com domains. He advised against migrating my website.

Thanks to everyone who retweeted my tweet on this issue. Also, thanks to Richard Kirkendall (CEO of Namecheap), the Namecheap support team, and Benedict Addis from the Shadowserver Foundation for contacting NIXI to resolve this issue.

...

Read the original on susam.in »

10 577 shares, 22 trendiness, 511 words and 4 minutes reading time

Lucid Index

Duet - Self hosted pro­ject man­age­ment, in­voic­ing and col­lab­o­ra­tion

Self-host web apps, microservices and lambdas on your server. Advanced features enable service reuse and composition.

Updated 8 months ago

The most re­cent re­lease v0.0.1 was pub­lished 1 year ago and has a sta­tus of sta­ble

Updated 6 months ago

The most re­cent re­lease v1.5.0 was pub­lished 1 year ago and has a sta­tus of sta­ble

Updated 1 month ago

The most re­cent re­lease 1.3.1 was pub­lished 6 years ago and has a sta­tus of sta­ble

Automatically gen­er­ate mul­ti­ple nat­ural lan­guage de­scrip­tions of your data vary­ing in word­ing and struc­ture.

Web-based, open source application for standards-based archival description and access in a multilingual, multi-repository environment.

Updated 6 days ago

The most re­cent re­lease v1.3.2 was pub­lished 4 years ago and has a sta­tus of sta­ble

Self-hosted analytics tool for those who care about privacy.

Updated 2 weeks ago

The most re­cent re­lease v1.4.1 was pub­lished 3 weeks ago and has a sta­tus of sta­ble

An in­tel­li­gent process and work­flow au­toma­tion plat­form based on soft­ware agents.

Updated 1 month ago

The most re­cent re­lease v0.9.5.1 was pub­lished 1 month ago and has a sta­tus of sta­ble

Admidio is a free open source user man­age­ment sys­tem for web­sites of or­ga­ni­za­tions and groups. The sys­tem has a flex­i­ble role model so that it’s pos­si­ble to re­flect the struc­ture and per­mis­sions of your or­ga­ni­za­tion.

Updated 2 days ago

The most re­cent re­lease v3.3.11 was pub­lished 4 months ago and has a sta­tus of sta­ble

Fast and easy-to-use webmail frontend for your existing IMAP mail server, Plesk or cPanel.

Updated 1 year ago

The most re­cent re­lease 2.2.0 was pub­lished 2 years ago and has a sta­tus of sta­ble

Open-source web-based media streamer and jukebox. A fork of Subsonic's last open-source release, before it switched licenses.

Updated 6 days ago

The most re­cent re­lease v10.5.0 was pub­lished 1 month ago and has a sta­tus of sta­ble

Akaunting is a free, on­line and open source ac­count­ing soft­ware de­signed for small busi­nesses and free­lancers.

Updated 1 day ago

The most re­cent re­lease 2.0.1 was pub­lished 2 weeks ago and has a sta­tus of sta­ble

AlertHub is a sim­ple tool to get alerted from GitHub re­leases.

Updated 3 months ago

The most re­cent re­lease v1.2.1 was pub­lished 1 year ago and has a sta­tus of sta­ble

Updated 3 days ago

The most re­cent re­lease 2.0-M2 was pub­lished 1 month ago and has a sta­tus of sta­ble

Web in­ter­face for , a pro­gram to down­load videos and au­dio from .

Updated 2 days ago

The most re­cent re­lease 2019.11.28 was pub­lished 1 week ago and has a sta­tus of sta­ble

Updated 4 weeks ago

The most re­cent re­lease v2.1.18 was pub­lished 1 year ago and has a sta­tus of sta­ble

Self hosted pro­ject man­age­ment, in­voic­ing, and col­lab­o­ra­tion. Duet pro­vides ever

An open, ex­ten­si­ble, wiki for your team built us­ing React and Node.js.

...

Read the original on selfhostedsource.tech »
