10 interesting stories served every morning and every evening.

1 1,923 shares, 81 trendiness, 0 words and 0 minutes reading time



Read the original on musclewiki.com »

2 1,493 shares, 60 trendiness, 1493 words and 14 minutes reading time

A fresh new avenue for Google to kill your SaaS startup

If you are here in a panic because Google Safe Browsing has blacklisted your website or SaaS, skip ahead to the section describing how to handle the situation. There are also many interesting comments on the Hacker News discussion page.

In the old days, when Google (or any poorly tuned AI that Google un­leashed) de­cided it wanted to kill your busi­ness, it would usu­ally re­sort to deny­ing ac­cess to one of its mul­ti­ple walled gar­dens, and that was that. You’ve prob­a­bly heard the hor­ror sto­ries:

They all fit the same mold. First, a business, by choice, uses Google services in a way that makes its survival entirely dependent on them. Second, Google, being the automated behemoth that Google is, does its thing: it ever so slightly adjusts the position of its own butt on its planet-sized leather armchair and, without really noticing, crushes a myriad of (relatively) ant-sized businesses in the process. Third, and finally, the ant-sized businesses desperately try to inform Google that they are being crushed, but they can only reach an automated suggestions box.

Sometimes, the ant-sized CEO knows a higher up at Google be­cause they were col­lege bud­dies, or the CTO writes an ant-sized Medium post that some­how makes it to the front page of Hacker News mound. Then Google no­tices the ant-sized prob­lem and some­times deems it wor­thy of solv­ing, usu­ally for fear of reg­u­la­tory reper­cus­sions that the ant rev­o­lu­tion might en­tail.

For this rea­son, con­ven­tional ant-sized wis­dom dic­tates that if pos­si­ble, you should not build your busi­ness to be overly re­liant on Google’s ser­vices. And if you man­age to avoid de­pend­ing on Google’s mul­ti­ple walled gar­dens to sur­vive, you will prob­a­bly be OK.

In today’s episode of “the Internet is not what it used to be”, let’s talk about a fresh new avenue for Google to inadvertently crush your startup that does not require you to use Google services in any (deliberate) way.

Did you know that it’s pos­si­ble for your site’s do­mains to be black­listed by Google for no par­tic­u­lar rea­son, and that this black­list is not only en­forced di­rectly in Google Chrome, but also by sev­eral other soft­ware and hard­ware ven­dors? Did you know that these other ven­dors syn­chro­nize this list with wildly vari­able tim­ings and in­ter­pre­ta­tions, in a way that can make fix­ing any is­sues ex­tremely stress­ful and un­pre­dictable? Did you know that Google’s ETA for re­view­ing a black­list re­port, no mat­ter how in­valid, is mea­sured in weeks?

This blacklist “feature” is called Google Safe Browsing, and the image here depicts the subtle message your users will see if one of your domains happens to be flagged in the Safe Browsing database. Warning texts range from “deceptive site ahead” to “the site ahead contains malware” (see here for a full list), but they all share an equally scary red background design, and a borderline impossible UI for people to skip the warning and use the site anyway.

The first time we ex­pe­ri­enced this is­sue, we learned about it from a surge of cus­tomer re­ports that said that they were see­ing the red warn­ing page when try­ing to use our SaaS. The sec­ond time, we were bet­ter pre­pared and there­fore had some free time to write this post.

For con­text, InvGate (our com­pany) is a SaaS plat­form for IT de­part­ments that runs on AWS with over 1000 SME and en­ter­prise cus­tomers, serv­ing mil­lions of end users. This means our prod­uct is used by IT teams to man­age is­sues and re­quests from their own users. You can imag­ine the pleas­ant re­ac­tion of IT Managers when sud­denly their IT tick­et­ing sys­tem starts dis­play­ing such omi­nous se­cu­rity warn­ings to their end users.

When we first bumped into this problem, we frantically tried to understand what was going on and learn how Google Safe Browsing (GSB from now on) worked, while our technical support team tried to keep up with customers reporting the issue. We quickly realized that an Amazon CloudFront CDN URL we used to serve static assets (CSS, JavaScript and other media) had been flagged, and that this was causing our entire application to fail for the customer instances using that particular CDN. A quick review of the allegedly affected system showed that everything appeared normal.

While our DevOps team was work­ing in full emer­gency mode to get a new CDN set up and prepar­ing to move cus­tomers over onto a new do­main, I found that Google’s doc­u­men­ta­tion claims that GSB pro­vides ad­di­tional ex­pla­na­tions about why a site has been flagged in the Google Search Console (GSC from now on) of the of­fend­ing site. I won’t bore you with the de­tails, but in or­der to ac­cess this in­for­ma­tion, you have to claim own­er­ship of the site in GSC, which re­quires you to set up a cus­tom DNS record or up­load some files onto the root of the of­fend­ing do­main. We scram­bled to do ex­actly that and af­ter 20 min­utes, man­aged to find the re­port about our site.

The re­port looked some­thing like this:

The report also contained a “Request Review” button that I promptly clicked without actually taking any action on the site, since there was no information whatsoever about the alleged problem. I filed for a review with a message noting that there were no offending URLs listed, despite documentation indicating that example URLs are always provided by Google to assist webmasters in identifying issues.

Around an hour later, and be­fore we had fin­ished mov­ing cus­tomers out of that CDN, our site was cleared from the GSB data­base. I re­ceived an au­to­mated email con­firm­ing that the re­view had been suc­cess­ful around 2 hours af­ter that fact. No clar­i­fi­ca­tion was given about what caused the prob­lem in the first place.

Over the week that followed this incident, and despite having had our URL cleared from the Safe Browsing blacklist, we continued to receive sporadic reports of companies having trouble accessing our systems.

Google Safe Browsing provides two different APIs that let both commercial and non-commercial software developers use the blacklist in their products. In particular, we identified that at least some customers using Firefox were also running into issues, and that antivirus/antimalware software and network-wide security appliances on customers’ networks were still flagging our site and preventing users from accessing it many days after the issue had been resolved.
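As a rough illustration of what these integrations do, here is a minimal sketch of querying the Safe Browsing Lookup API (v4) for a single URL. The client name is a placeholder, you need your own API key, and the threat types shown are a subset of what the API supports:

```python
import json
import urllib.request

API_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"


def build_lookup_payload(url, client_id="yourcompany", client_version="1.0"):
    """Build a v4 threatMatches:find request body for a single URL."""
    return {
        "client": {"clientId": client_id, "clientVersion": client_version},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }


def check_url(url, api_key):
    """POST the lookup request; an empty 'matches' list means the URL is not flagged."""
    body = json.dumps(build_lookup_payload(url)).encode()
    req = urllib.request.Request(
        f"{API_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read() or b"{}")
    return result.get("matches", [])
```

Vendors that consume the blacklist this way (or via the bulk Update API) cache and refresh it on their own schedules, which is why a clearance from Google can take days to propagate.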

We con­tin­ued to move all the cus­tomers off the for­merly black­listed CDN and onto a new one, and the is­sue was there­fore re­solved for good. We never prop­erly es­tab­lished the cause of the is­sue, but we chalked it up to some AI trip­ping on acid at Google’s HQ.

My 2 cents: If you run a SaaS busi­ness with an avail­abil­ity SLA, get­ting flagged by Google Safe Browsing for no par­tic­u­lar rea­son rep­re­sents a very real risk to busi­ness con­ti­nu­ity.

Sadly, given the oh-so-Goo­gly opac­ity of the mech­a­nism for flag­ging and re­view­ing sites, I don’t think there is a way you can fully pre­vent this from hap­pen­ing to you. But you can cer­tainly ar­chi­tect your app and processes to min­i­mize the chances of it hap­pen­ing, lower the im­pact of ac­tu­ally be­ing flagged, and min­i­mize the time needed to cir­cum­vent the is­sue if it arises.

Here are the steps we are tak­ing, and I there­fore rec­om­mend:

* Don’t keep all your eggs in one basket, domain-wise. GSB appears to flag entire domains or subdomains. For that reason, it’s a good idea to spread your applications over multiple domains, as that will reduce the impact of any single domain getting flagged. For example: company.com for your website, app.company.net for your application, eucdn.company.net for customers in Europe, useastcdn.company.net for customers on the US East coast, etc.

* Don’t host any cus­tomer gen­er­ated data in your main do­mains. A lot of the cases of black­list­ing that I found while re­search­ing this is­sue were caused by SaaS cus­tomers un­know­ingly up­load­ing ma­li­cious files onto servers. Those files are harm­less to the sys­tems them­selves, but their very ex­is­tence can cause the whole do­main to be black­listed. Anything that your users up­load onto your apps should be hosted out­side your main do­mains. For ex­am­ple: use com­pa­nyuser­con­tent.com to store files up­loaded by cus­tomers.

* Proactively claim ownership of all your production domains in Google Search Console. This won’t prevent your site from being blacklisted, but you will get an email as it happens, which will allow you to react quickly. Claiming ownership takes a little while, and that time is precious when you are dealing with an incident of this sort that is impacting your customers.

* Be ready to jump domains if you need to. This is the hardest thing to do, but it’s the only effective tool against being blacklisted: engineer your systems so that their referenced service domain names can easily be modified (by having scripts or orchestration tools available to perform this change), and possibly even have alternative names available and standing by. For example, have eucdn.company2.net be a CNAME for eucdn.company.net, and if the first domain is blocked, update your app’s configuration to load its assets from the alternate domain using a tool.

* If you can eas­ily and quickly switch your app to a dif­fer­ent do­main name, that is the only thing that will re­li­ably, quickly and pseudo-de­fin­i­tively re­solve the in­ci­dent. If pos­si­ble, do that. You’re done.

* Failing that, once you iden­tify the blocked do­main, re­view the re­ports that ap­pear on Google Search Console. If you had not claimed own­er­ship of the do­main be­fore this point, you will have to do it right now, which will take a while.

* If your site has ac­tu­ally been hacked, fix the is­sue (i.e. delete of­fend­ing con­tent or hacked pages) and then re­quest a se­cu­rity re­view. If your site has not been hacked or the Safe Browsing re­port is non­sen­si­cal, re­quest a se­cu­rity re­view any­way and state that the re­port is in­com­plete.

* Then, in­stead of wait­ing in agony, as­sum­ing that down­time is crit­i­cal for your sys­tem or busi­ness, get to work on mov­ing to a new do­main name any­way. The re­view might take weeks.
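The standby-CNAME idea above can be sketched in a few lines. This is an illustrative assumption rather than our actual tooling: a small table maps each region to a primary CDN domain and its standby alias, and the “jump” is a single substitution that an orchestration tool then pushes out to every tenant:

```python
# Primary CDN domain and standby CNAME alias per region,
# following the article's example names (hypothetical domains).
CDN_DOMAINS = {
    "eu": ("eucdn.company.net", "eucdn.company2.net"),
    "useast": ("useastcdn.company.net", "useastcdn.company2.net"),
}


def failover_asset_base(config: dict, region: str) -> dict:
    """Return a copy of the app config with the region's asset base URL
    switched from the primary CDN domain to the standby alias."""
    primary, standby = CDN_DOMAINS[region]
    new_config = dict(config)
    new_config["asset_base_url"] = config["asset_base_url"].replace(primary, standby)
    return new_config
```

Because the standby name is already a live CNAME for the primary, the switch only involves a config push, not waiting on DNS changes in the middle of an incident.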

The sec­ond time around, months af­ter the first in­ci­dent, we re­ceived an email from the Search Console warn­ing us that one of our do­mains had been flagged. A few hours af­ter this ini­tial email re­port, be­ing a G Suite do­main ad­min­is­tra­tor, I re­ceived an­other in­ter­est­ing email, which you can read be­low.

Let me sum­ma­rize what that is, be­cause it’s quite mind blow­ing. This email refers to the Search Console black­list alert emails. What this sec­ond e-mail says is that G Suite’s au­to­mated phish­ing e-mail fil­ter thinks Google Search Console’s email about our do­main be­ing black­listed is fake. It most cer­tainly is not, since our do­main was in­deed black­listed when we re­ceived the email. So Google can’t even de­cide whether its own email alerts about phish­ing are phish­ing. (LOL? 🤔)

It’s very clear to anyone working in tech that large corporate technology behemoths are, to a great extent, gatekeepers of the Internet. But I tend to interpret that in a loose, metaphorical way. The Safe Browsing incident described in this post made it very clear that Google literally controls who can access your website, no matter where and how you operate it. With Chrome having around 70% market share, and both Firefox and Safari using the GSB database to some extent, Google can, with a flick of a bit, singlehandedly make any site virtually inaccessible on the Internet.

This is an extraordinary amount of power, and one that is not suitable for Google’s “an AI will review your problem when and if it finds it convenient to do so” approach.


Read the original on gomox.medium.com »

3 1,394 shares, 47 trendiness, 791 words and 7 minutes reading time

why we had to change Elastic licensing

We re­cently an­nounced a li­cense change: Blog, FAQ. We posted some ad­di­tional guid­ance on the li­cense change this morn­ing. I wanted to share why we had to make this change.

This was an in­cred­i­bly hard de­ci­sion, es­pe­cially with my back­ground and his­tory around Open Source. I take our re­spon­si­bil­ity very se­ri­ously. And to be clear, this change most likely has zero ef­fect on you, our users. It has no ef­fect on our cus­tomers that en­gage with us ei­ther in cloud or on premises. Its goal, hope­fully, is pretty clear.

So why the change? AWS and Amazon Elasticsearch Service. They have been do­ing things that we think are just NOT OK since 2015 and it has only got­ten worse. If we don’t stand up to them now, as a suc­cess­ful com­pany and leader in the mar­ket, who will?

Our li­cense change is aimed at pre­vent­ing com­pa­nies from tak­ing our Elasticsearch and Kibana prod­ucts and pro­vid­ing them di­rectly as a ser­vice with­out col­lab­o­rat­ing with us.

Our li­cense change comes af­ter years of what we be­lieve to be Amazon/AWS mis­lead­ing and con­fus­ing the com­mu­nity - enough is enough.

We’ve tried every avenue available, including going through the courts, but with AWS’s ongoing behavior, we have decided to change our license so that we can focus on building products and innovating rather than litigating.

AWS’s behavior has forced us to take this step, and we do not do so lightly. If they had not acted as they have, we would not be having this discussion today.

We think that Amazon’s be­hav­ior is in­con­sis­tent with the norms and val­ues that are es­pe­cially im­por­tant in the open source ecosys­tem. Our hope is to take our pres­ence in the mar­ket and use it to stand up to this now so oth­ers don’t face these same is­sues in the fu­ture.

In the open source world, trade­marks are con­sid­ered a great and pos­i­tive way to pro­tect prod­uct rep­u­ta­tion. Trademarks have been used and en­forced broadly. They are con­sid­ered sa­cred by the open source com­mu­nity, from small pro­jects to foun­da­tions like Apache to com­pa­nies like RedHat. So imag­ine our sur­prise when Amazon launched their ser­vice in 2015 based on Elasticsearch and called it Amazon Elasticsearch Service. We con­sider this to be a pretty ob­vi­ous trade­mark vi­o­la­tion. NOT OK.

I took a per­sonal loan to reg­is­ter the Elasticsearch trade­mark in 2011 be­liev­ing in this norm in the open source ecosys­tem. Seeing the trade­mark so bla­tantly mis­used was es­pe­cially painful to me. Our ef­forts to re­solve the prob­lem with Amazon failed, forc­ing us to file a law­suit. NOT OK.

We have seen that this trade­mark is­sue dri­ves con­fu­sion with users think­ing Amazon Elasticsearch Service is ac­tu­ally a ser­vice pro­vided jointly with Elastic, with our bless­ing and col­lab­o­ra­tion. This is just not true. NOT OK.

When the ser­vice launched, imag­ine our sur­prise when the Amazon CTO tweeted that the ser­vice was re­leased in col­lab­o­ra­tion with us. It was not. And over the years, we have heard re­peat­edly that this con­fu­sion per­sists. NOT OK.

When Amazon an­nounced their Open Distro for Elasticsearch fork, they used code that we be­lieve was copied by a third party from our com­mer­cial code and pro­vided it as part of the Open Distro pro­ject. We be­lieve this fur­ther di­vided our com­mu­nity and drove ad­di­tional con­fu­sion.

More on this here. NOT OK.

Recently, we found more examples of what we consider to be ethically challenged behavior. We have differentiated with proprietary features, and now we see these feature designs serving as “inspiration” for Amazon, telling us their behavior continues and is more brazen. NOT OK.

We col­lab­o­rate with cloud ser­vice providers, in­clud­ing Microsoft, Google, Alibaba, Tencent, Clever Cloud, and oth­ers. We have shown we can find a way to do it. We even work with other parts of Amazon. We are al­ways open to do­ing that; it just needs to be OK.

I be­lieve in the core val­ues of the Open Source Community: trans­parency, col­lab­o­ra­tion, open­ness. Building great prod­ucts to the ben­e­fit of users across the world. Amazing things have been built and will con­tinue to be built us­ing Elasticsearch and Kibana.

And to be clear, this change most likely has zero ef­fect on you, our users. And no ef­fect on our cus­tomers that en­gage with us ei­ther in cloud or on premises.

We cre­ated Elasticsearch; we care about it more than any­one else. It is our life’s work. We will wake up every day and do more to move the tech­nol­ogy for­ward and in­no­vate on your be­half.

Thanks for lis­ten­ing. If you have more ques­tions or you want more clar­i­fi­ca­tion please read here or con­tact us at elas­tic_li­cense@elas­tic.co.

Thank you. It is a priv­i­lege to be on this jour­ney with you.


Read the original on www.elastic.co »

4 1,103 shares, 44 trendiness, 1668 words and 14 minutes reading time

IPFS Support in Brave

Over the past sev­eral months, the Brave team has been work­ing with Protocol Labs on adding InterPlanetary File System (IPFS) sup­port in Brave. This is the first deep in­te­gra­tion of its kind and we’re very proud to out­line how it works in this post.

IPFS is an ex­cit­ing tech­nol­ogy that can help con­tent cre­ators dis­trib­ute con­tent with­out high band­width costs, while tak­ing ad­van­tage of data dedu­pli­ca­tion and data repli­ca­tion. There are per­for­mance ad­van­tages for load­ing con­tent over IPFS by lever­ag­ing its ge­o­graph­i­cally dis­trib­uted swarm net­work. IPFS is im­por­tant for blockchain and for self de­scribed data in­tegrity. Previously viewed con­tent can even be ac­cessed of­fline with IPFS! The IPFS net­work gives ac­cess to con­tent even if it has been cen­sored by cor­po­ra­tions and na­tion-states, such as for ex­am­ple, parts of Wikipedia.

IPFS support allows Brave desktop users to download content by using a content hash, known as a content identifier (CID). Unlike HTTP(S), there is no specified location for the content.

Each node in the IPFS net­work is a po­ten­tial host for the con­tent be­ing re­quested, and if a node does­n’t have the con­tent be­ing re­quested, the node can re­trieve the con­tent from the swarm of peers. The re­trieved con­tent is ver­i­fied lo­cally, re­mov­ing the need to trust a third par­ty’s in­tegrity.

HTTP(S) uses Uniform Resource Locators (URLs) to specify the location of content. This system can be easily censored, since the content is hosted in specific locations on behalf of a single entity, and it is susceptible to Distributed Denial of Service (DDoS) attacks. IPFS identifies its content by content paths and/or CIDs inside Uniform Resource Identifiers (URIs), not URLs.
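The verify-what-you-fetch property can be illustrated with a toy content-addressed store. This is only a conceptual sketch: real IPFS CIDs are multibase-encoded multihashes over chunked DAGs, not the raw SHA-256 hex digests used here:

```python
import hashlib


def content_id(data: bytes) -> str:
    """Toy stand-in for a CID: the hex SHA-256 of the content itself."""
    return hashlib.sha256(data).hexdigest()


class TinySwarm:
    """Toy peer store: blobs are keyed by the hash of their own bytes."""

    def __init__(self):
        self.blobs = {}

    def add(self, data: bytes) -> str:
        cid = content_id(data)
        self.blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self.blobs[cid]
        # Re-hashing on retrieval removes the need to trust the serving peer:
        # tampered content simply fails to match its identifier.
        if content_id(data) != cid:
            raise ValueError("content does not match its identifier")
        return data
```

Because the identifier is derived from the content rather than from a host name, any peer can serve the bytes and the client can still verify them locally.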


Read the original on brave.com »

5 1,039 shares, 67 trendiness, 1000 words and 9 minutes reading time

Stepping up for a truly open source Elasticsearch

Last week, Elastic announced they will change their software licensing strategy, and will not release new versions of Elasticsearch and Kibana under the Apache License, Version 2.0 (ALv2). Instead, new versions of the software will be offered under the Elastic License (which limits how it can be used) or the Server Side Public License (which has requirements that make it unacceptable to many in the open source community). This means that Elasticsearch and Kibana will no longer be open source software. In order to ensure open source versions of both packages remain available and well supported, including in our own offerings, we are announcing today that AWS will step up to create and maintain an ALv2-licensed fork of open source Elasticsearch and Kibana.

We launched Open Distro for Elasticsearch in 2019 to provide customers and developers with a fully featured Elasticsearch distribution that provides all of the freedoms of ALv2-licensed software. Open Distro for Elasticsearch is a 100% open source distribution that delivers functionality practically every Elasticsearch user or developer needs, including support for network encryption and access controls. In building Open Distro, we followed the recommended open source development practice of “upstream first.” All changes to Elasticsearch were sent as upstream pull requests (#42066, #42658, #43284, #43839, #53643, #57271, #59563, #61400, #64513), and we then included the “oss” builds offered by Elastic in our distribution. This ensured that we were collaborating with the upstream developers and maintainers, and not creating a “fork” of the software.

Choosing to fork a pro­ject is not a de­ci­sion to be taken lightly, but it can be the right path for­ward when the needs of a com­mu­nity di­verge—as they have here. An im­por­tant ben­e­fit of open source soft­ware is that when some­thing like this hap­pens, de­vel­op­ers al­ready have all the rights they need to pick up the work them­selves, if they are suf­fi­ciently mo­ti­vated. There are many suc­cess sto­ries here, like Grafana emerg­ing from a fork of Kibana 3.

When AWS de­cides to of­fer a ser­vice based on an open source pro­ject, we en­sure that we are equipped and pre­pared to main­tain it our­selves if nec­es­sary. AWS brings years of ex­pe­ri­ence work­ing with these code­bases, as well as mak­ing up­stream code con­tri­bu­tions to both Elasticsearch and Apache Lucene, the core search li­brary that Elasticsearch is built on—with more than 230 Lucene con­tri­bu­tions in 2020 alone.

Our forks of Elasticsearch and Kibana will be based on the lat­est ALv2-licensed code­bases, ver­sion 7.10. We will pub­lish new GitHub repos­i­to­ries in the next few weeks. In time, both will be in­cluded in the ex­ist­ing Open Distro dis­tri­b­u­tions, re­plac­ing the ALv2 builds pro­vided by Elastic. We’re in this for the long haul, and will work in a way that fos­ters healthy and sus­tain­able open source prac­tices—in­clud­ing im­ple­ment­ing shared pro­ject gov­er­nance with a com­mu­nity of con­trib­u­tors.

You can rest as­sured that nei­ther Elastic’s li­cense change, nor our de­ci­sion to fork, will have any neg­a­tive im­pact on the Amazon Elasticsearch Service (Amazon ES) you cur­rently en­joy. Today, we of­fer 18 ver­sions of Elasticsearch on Amazon ES, and none of these are af­fected by the li­cense change.

In the fu­ture, Amazon ES will be pow­ered by the new fork of Elasticsearch and Kibana. We will con­tinue to de­liver new fea­tures, fixes, and en­hance­ments. We are com­mit­ted to pro­vid­ing com­pat­i­bil­ity to elim­i­nate any need to up­date your client or ap­pli­ca­tion code. Just as we do to­day, we will pro­vide you with a seam­less up­grade path to new ver­sions of the soft­ware.

This change will not slow the ve­loc­ity of en­hance­ments we of­fer to our cus­tomers. If any­thing, a com­mu­nity-owned Elasticsearch code­base pre­sents new op­por­tu­ni­ties for us to move faster in im­prov­ing sta­bil­ity, scal­a­bil­ity, re­siliency, and per­for­mance.

Developers em­brace open source soft­ware for many rea­sons, per­haps the most im­por­tant be­ing the free­dom to use that soft­ware where and how they wish.

The term “open source” has had a specific meaning since it was coined in 1998. Elastic’s assertions that the SSPL is “free and open” are misleading and wrong. They’re trying to claim the benefits of open source, while chipping away at the very definition of open source itself. Their choice of SSPL belies this. SSPL is a non-open source license designed to look like an open source license, blurring the lines between the two. As the Fedora community states, “[to] consider the SSPL to be ‘Free’ or ‘Open Source’ causes [a] shadow to be cast across all other licenses in the FOSS ecosystem.”

In April 2018, when Elastic co-mingled their proprietary licensed software with the ALv2 code, they promised in “We Opened X-Pack”: “We did not change the license of any of the Apache 2.0 code of Elasticsearch, Kibana, Beats, and Logstash — and we never will.” Last week, after reneging on this promise, Elastic updated that same page with a footnote that says “circumstances have changed.”

Elastic knows what they’re doing is fishy. The community has told them this (e.g., see Brasseur, Quinn, DeVault, and Jacob). It’s also why they felt the need to write an additional blustery blog (on top of their initial license change blog) to try to explain their actions as “AWS made us do it.” Most folks aren’t fooled. We didn’t make them do anything. They believe that restricting their license will lock others out of offering managed Elasticsearch services, which will let Elastic build a bigger business. Elastic has a right to change their license, but they should also step up and own their own decision.

In the mean­time, we’re ex­cited about the long-term jour­ney we’ve em­barked on with Open Distro for Elasticsearch. We look for­ward to pro­vid­ing a truly open source op­tion for Elasticsearch and Kibana us­ing the ALv2 li­cense, and build­ing and sup­port­ing this fu­ture with the com­mu­nity.

An ear­lier ver­sion of this post in­cor­rectly in­di­cated that the Jenkins CI tool was a fork. We thank @abayer for the cor­rec­tion.


Read the original on aws.amazon.com »

6 878 shares, 28 trendiness, 0 words and 0 minutes reading time

Signal Status

Signal is up and run­ning.


Read the original on status.signal.org »

7 860 shares, 33 trendiness, 147 words and 2 minutes reading time

I no longer trust The Great Suspender

I know a number of folks use The Great Suspender to automatically suspend inactive browser tabs in Chrome. Apparently recent versions of this extension have been taken over by a shady anonymous entity, and the extension is now flagged by Microsoft as malware. Notably, the most recent version of the extension (v7.1.8) has added integrated analytics that can track all of your browsing activity across all sites. Yikes.

Recommendations for users of The Great Suspender (7.1.8):

* Disable analytics tracking by opening the extension options for The Great Suspender and checking the box “Automatic deactivation of any kind of tracking”.

* Pray that the shady developer doesn’t issue a malicious update to The Great Suspender later. (There’s no sensible way to disable updates of an individual extension.)

* Close as many un­needed tabs as you can.

* Download the latest good version of The Great Suspender (7.1.6) from GitHub, and move it to some permanent location outside your Downloads folder. (It should be commit 9730c09.)

* Load your downloaded copy as an unpacked extension. (This copy will not auto-update to future untrusted versions of the extension.)

Caveat: My un­der­stand­ing is that in­stalling an un­packed ex­ten­sion in this way will cause Chrome to is­sue a new kind of se­cu­rity prompt every time it is launched, which you’ll have to ig­nore. 😕

Other browser extensions for suspending tabs exist, as mentioned in the Hacker News discussion for this article. However, I have not conducted my own security review of any of those other extensions, so buyer beware.


Read the original on dafoster.net »

8 797 shares, 30 trendiness, 2113 words and 15 minutes reading time

I wasted $40k on a fantastic startup idea

You have a mind-shattering headache. You’re standing in the aisle of your local CVS, massaging your temples while scanning the shelves for something—anything—to make the pain stop.

What do you reach for? Tylenol? Advil? Aleve?

Most peo­ple, I imag­ine, grab what­ev­er’s cheap­est, or clos­est, or what­ever they al­ways use. But if you’re scrupu­lous enough to ask Google for the best painkiller, here’s how your friendly neigh­bor­hood tech be­he­moth would an­swer:

Oh thanks Google that’s just all of them.

If you’re among the 77% of Americans that Google their health prob­lems, in­sipid an­swers like this won’t sur­prise you. But we should be sur­prised, be­cause re­searchers carry out tens of thou­sands of clin­i­cal tri­als every year. And hun­dreds of clin­i­cal tri­als have ex­am­ined the ef­fec­tive­ness of painkillers. So why can’t I Google those re­sults?

And so in the year of our lord 2017 I had a Brilliant Startup Idea: use a struc­tured data­base of clin­i­cal tri­als to pro­vide sim­ple, prac­ti­cal an­swers to com­mon med­ical ques­tions.

As a proof-of-con­cept I tried this by hand: I made a spread­sheet with every OTC painkiller trial I could find and used R to run a net­work meta-analy­sis, the gold stan­dard of ev­i­dence-based med­i­cine.

The re­sults were pretty in­ter­est­ing, and ex­actly the kind of thing I was look­ing for back in the sad ster­ile aisles of CVS:

A wave of exhilaration washed over me. Here was a problem that

A per­fect bulls­eye. After a few hours search­ing do­mains I came up with a name for my pro­ject: GlacierMD.

Over the next nine months I would quit my job, write over 200,000 lines of code, hire five con­trac­tors, cre­ate a Delaware C-Corp, add four doc­tors to my ad­vi­sory board, and demo GlacierMD for twelve Bay Area med­ical prac­tices. I would spend $40K of my own sav­ings buy­ing clin­i­cal tri­als and pay­ing con­trac­tors to en­ter said tri­als into the GlacierMD data­base.

On July 2, 2018, GlacierMD pow­ered the world’s largest de­pres­sion meta-analy­sis, us­ing data from 846 tri­als, beat­ing Cipriani’s pre­vi­ous record of 522.

Choirs of an­gels sang in my ears. Here I was, liv­ing the Silicon Valley dream: mak­ing the world a bet­ter place through tech­nol­ogy.

Two weeks later GlacierMD was dead.

“That’s an awesome idea,” said Carl. “It sounds like something worth working on.”

Carl was my boss. We worked at a startup that lever­aged au­tonomous blockchains to trans­fer money from naïve in­vestors to slightly less naïve twenty-some­things. There are worse gigs.

And here was Carl telling me that my startup idea would bring such ben­e­fit to hu­man­ity that I sim­ply had to quit, his roadmap be damned. I nod­ded know­ingly, feel­ing the weight of this re­spon­si­bil­ity rest­ing on my proud shoul­ders.

“Thanks Carl,” I said. “I’ll try to mention you when I accept my Nobel.”

I quit two weeks later and started cod­ing at a blis­ter­ing pace. I drew all sorts of in­scrutable di­a­grams with dry-erase pens on my par­ents’ win­dows. I hired a mot­ley crew of Egyptian con­trac­tors to start en­ter­ing clin­i­cal tri­als into my data­base. I com­mis­sioned a logo, reg­is­tered my do­main, and started ob­sess­ing over color schemes.

When I fi­nally fin­ished the MVP I showed it to the head of prod­uct at the com­pany I’d just left. I watched him as he watched my demo, wait­ing for his eyes to melt with the glory of it all. Instead he just sorta shrugged.

“Lots of people make medical claims on the internet,” he said. “Why should I trust yours?”

I started bab­bling about net­work meta-analy­ses, sta­tis­ti­cal power, and p-val­ues, but he cut me off.

“Yeah okay that’s great, but nobody cares about this math crap. You need doctors.”

Goddamnit he was right. If no­body could be both­ered with the math, then I was no bet­ter than Gwyneth Paltrow hawk­ing vagina eggs. To build trust I needed to get en­dorse­ments from trust­wor­thy peo­ple.

So I called up some friends, some buddies, some friends-of-friends. “Would you like to be an advisor for my cutting-edge health-tech startup?” I'd ask, while eating Dominos in my parents' laundry room. I'd give them 1% of this extremely valuable, high-growth startup and in exchange I could plaster their faces all over my website.

Four of these doctors agreed. This is called making deals, ladies and gentlemen, and I was like the lovechild of Warren Buffett and Dr. Oz.

Things are going great. My friends and family all tell me they love the site. Even some strangers on the internet love it. “I know, right,” I tell them. “So how much would you pay for this?”

“Hahahahahahah,” they say in unison. “Good one!”

I forgot that the first law of consumer tech is nobody pays for consumer tech. But no problemo, I say to myself. This is why Eric Schmidt invented ads. I'll just plaster a few banners on GlacierMD and bing bang boom I'll be seasteading with Peter Thiel before Burning Man.

But then I look at WebMD's 10-Qs and start to spiral. Turns out the world's biggest health website makes about $0.50/year per user. That is…not enough money to bootstrap GlacierMD. I'm pouring money into my rent, into my Egyptian contractors, into AWS—I need some cash soon.
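The back-of-envelope here is brutal. A rough sketch, using WebMD's $0.50/user/year figure from above and a made-up placeholder for monthly burn (the post never states the actual number):

```python
# Back-of-envelope: how many users does an ad-supported GlacierMD need?
REVENUE_PER_USER_YEAR = 0.50   # WebMD's ballpark ad revenue per user, from the 10-Qs
MONTHLY_BURN = 5_000           # hypothetical burn: rent + contractors + AWS

annual_burn = 12 * MONTHLY_BURN
users_needed = annual_burn / REVENUE_PER_USER_YEAR
print(f"{users_needed:,.0f} users just to break even")  # → 120,000 users
```

Even at a modest five-figure annual burn, that is six figures of active users before the first dollar of profit.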

What I need are people willing to pay for this thing. What about doctors? Doctors have money, right? Maybe doctors, or practices, or whatever—someone in the medical industry—maybe they would shell out some cash for my on-demand meta-analyses.

So I listened to a few podcasts and became a sales expert. I started cold calling people using scripts from the internet and tried to convince them to sit through a GlacierMD demo.

In the meantime I receive some worrying messages from my Egyptian contractors.

“I think it's time to talk about a raise,” one of them says.

“I feel that I have become exceptional at my job,” says another. “Please consider a raise or I will stop working.”

“Please increase my pay,” says the third, including helpful screenshots demonstrating how to give said raise through the Upwork website.

Are my contractors unionizing? I wonder. I glance obliquely at my shrinking bank account statement, grit my teeth, and approve the raises. At this rate I'll hit zero in a matter of weeks.

But my sales calls start paying off. Miraculously I find some doctors that are willing to talk to me. So I borrow my parents' car and drive out to the burbs to meet a doctor I'll call Susan.

Susan has a small practice in downtown Redwood City, a Silicon Valley town that looks 3-D printed from the Google Image results for main street.

Susan is a bit chatty (she's a psychiatrist) but eventually I demo GlacierMD. I show her how you can filter studies based on the demographic data of the patient, how you can get treatment recommendations based on a preferred side effect profile, how you can generate a dose-response curve. She oohs and aahs at all the right points. By the end of the interview she's practically drooling.

Hook, line, and sinker, I think to myself. I'm already contemplating what color Away bags would look best in the back of my Cybertruck when Susan interrupts my train of thought.

“What a fun project!” she says enthusiastically.

Something in her tone makes me pause. “Uh, yeah,” I say. “So what would you imagine a product like this—one that could change the very practice of medicine—how much would you pay for such a service?”

“Oh, uh—hmmmm,” she said. “I don't know if we can spare the budget here, to be honest. It's very fun…but I'm not sure if our practice can justify this cost.”

If you read enough sales books most of them tell you that when people say your product is too expensive, what they really mean is your product isn't valuable enough. Susan acted like I was offering her Nirvana as a Service, so the conversation has taken quite a wild turn.

“So you don't think this product is useful?”

“Oh sure! I mean, I think in many cases I'll just prescribe what I normally do, since I'm comfortable with it. But you know it's possible that sometimes I'll prescribe something different, based on your metastudies.”

“And that isn't worth something? Prescribing better treatments?”

“Hmmmm,” she said, picking at her fingernails. “Not directly. Of course I always have the best interests of my patients in mind, but, you know, it's not like they'll pay more if I prescribe Lexapro instead of Zoloft. They won't come back more often or refer more friends. So I'd sorta just be, like, donating this money if I paid you for this thing, right?”

I had literally nothing to say to that. It had been a working assumption of mine over the past few weeks that if you could improve the health of the patients then, you know, the doctors or the hospitals or whatever would pay for that. There was this giant thing called healthcare, right, and its main purpose is improving health—trillions of dollars are spent trying to do this. So if I built a thing that improves health, someone should pay me, right?

I said goodbye to Susan and tried to cheer myself up. I had ten more meetings with doctors all over the Bay Area—surely not all of them were ruthless capitalists like Susan. Maybe they would see the towering genius of GlacierMD and shell out some cash.

But in fact everyone gave me some version of Susan's answer. “We just can't justify the cost,” a pediatrician told me. “I'm not sure it's in the budget,” said a primary care physician. “It's awesome,” said a hospitalist. “You should try to sell this!” Ugh.

So in July 2018, nine months and $40K after starting GlacierMD, I shut it down. I fired my contractors, archived the database, and shut down the servers. GlacierMD was dead.

Make something people want. It's Y Combinator's motto and a maxim of aspiring internet entrepreneurs. The idea is that if you build something truly awesome, you'll figure out a way to make some money off of it.

So I built something people wanted. Consumers wanted it, doctors wanted it, I wanted it. Where did I go wrong?

Occasionally I like to disconnect from the IV drip of internet pseudoknowledge and learn stuff from books. I know, it's weird—maybe even a bit hipster. But recently I read Wharton's introductory marketing textbook, Strategic Marketing Management. The very first chapter has this to say:

“To succeed, an offering must create value for all entities involved in the exchange—target customers, the company, and its collaborators.”

All stakeholders. You can't just create value for the user: that's a charity. You also can't just create value for your company: that's a scam. Your goal is to set up some kind of positive-sum exchange, where everyone benefits, including you. A business plan, according to this textbook, starts with this simple question: how will you create value for yourself and the company?

I winced audibly when I read this. How much time I could've saved! If I'd articulated at the beginning how I expected to extract value from GlacierMD, maybe I would've researched the economics of an ad-based model, or I would've validated that doctors were willing to pay, or hospitals, or insurance companies.

A few months after shuttering GlacierMD and returning to corporate life my buddy pitched me a new startup idea.

“It's called Doppelganger,” he said. “It's super simple—you upload a selfie to the database, and then it uses AI or whatever to instantly find everyone in the database who—”

“Looks like you,” I finished for him.

“Exactly,” he said, grinning ear to ear. “How awesome would that be? You should build it!”

I mean, I dunno, it sounds like something fun to do at parties. In a narrow sense, it's something I want, but there's no way in hell I'm going to devote any time to this. Doppelganger has created value for the customer but not for the company.

“Call me when you have a business plan,” I said, lacing up my Allbirds and riding my Lime scooter into the sunset.



Read the original on tjcx.me »

9 754 shares, 29 trendiness, 877 words and 8 minutes reading time

Software effort estimation is mostly fake research

Effort estimation is an important component of any project, software or otherwise. While effort estimation is something that everybody in industry is involved with on a regular basis, it is a niche topic in software engineering research. The problem is researcher attitude (e.g., they are unwilling to venture into the wilds of industry), which has stopped them acquiring the estimation data needed to build realistic models. A few intrepid people have risked an assault on their ego and talked to people in industry; the outcome has been, until very recently, a small collection of tiny estimation datasets.

In a research context the term effort estimation is actually a hangover from the 1970s; effort correction more accurately describes the behavior of most models since the 1990s. In the 1970s models took various quantities (e.g., estimated lines of code) and calculated an effort estimate. Later models have included an estimate as input to the model, producing a corrected estimate as output. For the sake of appearances I will use existing terminology.

Which effort estimation datasets do researchers tend to use?

A 2012 review of datasets used for effort estimation using machine learning between 1991 and 2010 found that the top three were: Desharnais with 24 papers (29%), COCOMO with 19 papers (23%), and ISBSG with 17. A 2019 review of datasets used for effort estimation using machine learning between 1991 and 2017 found the top three to be NASA with 17 papers (23%), the COCOMO data and ISBSG joint second with 16 papers (21%), and Desharnais third with 14. The 2012 review included more sources in its search than the 2019 review, and subjectively your author has noticed a greater use of the NASA dataset over the last five years or so.

How large are these datasets that have attracted so many research papers?

The NASA dataset contains 93 rows (that is not a typo, there is no power-of-ten missing), COCOMO 63 rows, Desharnais 81 rows, and ISBSG is licensed by the International Software Benchmarking Standards Group (academics can apply for limited-time use for research purposes, i.e., without paying the $3,000 annual subscription). The China dataset contains 499 rows, and is sometimes used (there is no mention of a supercomputer being required for this amount of data ;-).

Why are researchers involved in software effort estimation feeding tiny datasets from the 1990s into machine learning algorithms?

Grant money. Research projects are more likely to be funded if they use a trendy technique, and for the last decade machine learning has been the trendiest technique in software engineering research. What data is available to learn from? Those estimation datasets that were flogged to death in the 1990s using non-machine-learning techniques, e.g., regression.

Use of machine learning also has the advantage of not needing to know anything about the details of estimating software effort. Everything can be reduced to a discussion of the machine learning algorithms, with performance judged by a chosen error metric. Nobody actually looks at the predicted estimates to discover that the models are essentially producing the same answer, e.g., one learner predicts 43 months, 2 weeks, 4 days, 6 hours, 47 minutes and 11 seconds, while a ‘better’ fitting one predicts 43 months, 2 weeks, 2 days, 6 hours, 27 minutes and 51 seconds.
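Converting both of those predictions to a common unit shows how cosmetic the difference is. A quick sketch (the 30-day month is my assumption; the conclusion does not depend on it):

```python
# The two "different" model predictions from the example above, in hours.
# Month length is assumed to be 30 days.
def to_hours(months, weeks, days, hours, minutes, seconds):
    total_days = months * 30 + weeks * 7 + days
    return total_days * 24 + hours + minutes / 60 + seconds / 3600

a = to_hours(43, 2, 4, 6, 47, 11)   # first learner
b = to_hours(43, 2, 2, 6, 27, 51)   # the 'better' fitting learner

rel_diff = abs(a - b) / a
print(f"relative difference: {rel_diff:.4%}")  # well under 1%
```

Two models separated by a fraction of a percent over a three-and-a-half-year prediction, reported to the second.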

How many ways are there to do machine learning on datasets containing less than 100 rows?

A paper from 2012 evaluated the possibilities using 9 learners times 10 data-preprocessing options (e.g., log transform or discretization) times 7 error estimation metrics, giving 630 possible final models; they picked the top 10 performers.
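The size of that search space, relative to the data, is the whole problem. Here is an illustrative sketch, with made-up pure-noise "scores" standing in for real fitted models, of how picking the best of 630 models on a 93-row dataset rewards luck:

```python
import itertools
import random
import statistics

N_ROWS = 93  # size of the NASA dataset
learners = [f"learner_{i}" for i in range(9)]
preprocs = [f"preproc_{i}" for i in range(10)]
metrics = [f"metric_{i}" for i in range(7)]

combos = list(itertools.product(learners, preprocs, metrics))
print(len(combos))  # 630 candidate models

# Pretend every model is equally useless: its "score" is just the mean of
# 93 standard-normal residuals. Any spread between models is pure noise.
def noise_score(combo):
    rng = random.Random("|".join(combo))  # deterministic per combo
    return statistics.mean(rng.gauss(0.0, 1.0) for _ in range(N_ROWS))

scores = sorted(noise_score(c) for c in combos)
# The "top performers" look meaningfully better than the median,
# despite every model being identical junk.
print(round(scores[0], 3), round(statistics.median(scores), 3), round(scores[-1], 3))
```

With 630 draws and only 93 rows to score against, the winners of the tournament are largely the luckiest, not the best.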

This 2012 study has not stopped researchers continuing to twiddle away on the knobs available to them; anything to keep the paper mills running.

To quote the authors of one review paper: “Unfortunately, we found that very few papers (including most of our own) paid any attention at all to properties of the data set.”

Agile techniques are widely used these days, and datasets from the 1990s are not applicable. What datasets do researchers use to build Agile effort estimation models?

A 2020 review of Agile development effort estimation found 73 papers. The most popular dataset, containing 21 rows, was used by nine papers. Three papers used simulated data! At least some authors were going out and finding data, even if it contains fewer rows than the NASA dataset.

As researchers in business schools have shown, large datasets can be obtained from industry; ISBSG actively solicits data from industry and now has data on 9,500+ projects (as far as I can tell a small amount for each project, but that is still a lot of projects).

Are there any estimates on GitHub? Some open source projects use JIRA, which includes support for making estimates. Some story point estimates can be found on GitHub, but the actuals are missing.

A handful of researchers have obtained and released estimation datasets containing thousands of rows, e.g., the SiP dataset contains 10,100 rows and the CESAW dataset contains over 40,000 rows. These datasets are generally ignored, perhaps because when presented with lots of real data researchers have no idea what to do with it.


Read the original on shape-of-code.coding-guidelines.com »

10 748 shares, 30 trendiness, 280 words and 3 minutes reading time


The windows crate lets you call any Windows API, past, present, and future, using code generated on the fly directly from the metadata describing the API, right in your Rust package, where you can call the APIs as if they were just another Rust module.

The Rust language projection follows in the tradition established by C++/WinRT of building language projections for Windows using standard languages and compilers, providing a natural and idiomatic way for Rust developers to call Windows APIs.

Start by adding the following to your Cargo.toml file:

[dependencies]
windows = "0.2.1"

[build-dependencies]
windows = "0.2.1"
This will allow Cargo to download, build, and cache Windows support as a package. Next, specify which types you need inside of a build.rs build script and the windows crate will generate the necessary bindings:

fn main() {
    windows::build!(
        windows::win32::system_services::{CreateEventW, SetEvent, WaitForSingleObject}
    );
}
Finally, make use of any Windows APIs as needed.

mod bindings {
    ::windows::include_bindings!();
}

use bindings::{
    windows::win32::system_services::{CreateEventW, SetEvent, WaitForSingleObject},
};

fn main() -> windows::Result<()> {
    // Call the imported CreateEventW, SetEvent, and WaitForSingleObject here.
    Ok(())
}
To reduce build time, use a bindings crate rather than simply a module. This will allow Cargo to cache the results and build your project far more quickly.

There is an experimental documentation generator for the Windows API. The documentation is published here. This can be useful to figure out how the various Windows APIs map to Rust modules and which use paths you need from within the build macro.

For a more complete example, take a look at Robert Mikhayelyan's Minesweeper. More simple examples can be found here.


Read the original on github.com »
