10 interesting stories served every morning and every evening.




1 701 shares, 21 trendiness, 752 words and 6 minutes reading time

Amazon Let a Fraudster Keep My Sony a7R IV and Refunded Him $2,900

I am an amateur photographer, and I’ve sold cameras non-professionally on Amazon for over eight years as I’ve upgraded. That trend comes to an end with my most recent transaction. In December, I sold a mint-in-box Sony a7R IV, and the buyer used a combination of social engineering and ambiguity to end up with not only the camera but also the money he paid me.

Amazon’s A-to-Z Guarantee did not pro­tect me as a seller. Based on my ex­pe­ri­ence with this trans­ac­tion, I can­not in good faith rec­om­mend sell­ing cam­eras on Amazon any­more.

Author’s Note: This is a sum­mary and my per­sonal opin­ion, and not that of my em­ployer or any­one else.

I ordered a second Sony a7R IV as a backup for a photoshoot. My plan was to then resell it, as the seller fees were slightly less than the rental fees at the time. I listed it on Amazon, and it was almost instantly purchased by a buyer from Florida. I took photos of the camera as I prepared it for shipment, and shipped it via confirmed and insured two-day FedEx. The package was delivered to the buyer on December 17th.

The buyer listed an ini­tial for his last name—that should have been a red flag. It gave him a layer of anonymity that will be rel­e­vant later.

On December 24th, I apparently ruined Christmas, as the buyer now claims that accessories were missing. Throughout this whole ordeal, I’ve never heard directly from the buyer, in spite of numerous email communications. He never told me which “product purchased/packaging” was missing.

I started a claim with Amazon, show­ing the pho­to­graphic ev­i­dence of a mint-in-box a7R 4 with all ac­ces­sories. I de­nied the re­turn. The buyer of­fered no pho­to­graphic proof or other ev­i­dence that he re­ceived any­thing but a mint cam­era.

To this day, I have no idea what he claimed was “missing” from the package. I even included all the original plastic wrap!

After about a week of back-and-forth emails, Amazon ini­tially agreed with me.

Somehow, a second support ticket for the same item got opened up. The issue was not yet resolved, and the buyer kept clawing back. The next day, I got an email about “a refund request initiated.” On this second ticket, Amazon now turned against me.

Now, we’re in 2020. The buyer apparently shipped the camera back to me; however, he entered the wrong address (forgetting my last name, among other things). The package was returned to sender, and I never got to inspect what was inside. Whether that box contained the camera in like-new condition as sent, a used camera, or a box of stones is an eternal mystery.

Truly, had he shipped it to the right ad­dress, I would have had mul­ti­ple wit­nesses and video footage of the un­box­ing.

Here’s where it gets interesting: as I appealed the claim, Amazon noted that the buyer is not responsible for shipping the item back to the correct address, and that they can indeed keep the item if they want to, following the initiation of an A-to-Z Guarantee claim.

Indeed, I have a paper trail of the emails that I did in fact send to Amazon. Somehow, they got their tickets confused, and when I followed up on this, they shut off communication.

So, as a buyer, you can keep an item with “no obligation to return,” even if you can’t substantiate your claim of “missing items or box.” Now the buyer has the camera, and the cash.

The whole experience has been frustrating and humiliating, and it has soured me on my photography hobby. I hope this serves as a cautionary tale about selling such goods on Amazon.

As of now, I’ve emailed the buyer again asking him to ship the camera back to me, and I opened another case with Amazon in which I provided the 23 emails they claim I never sent them. That case was closed with no response from Amazon. I had an initially sympathetic ear through their Twitter support, until I mentioned the specifics of my case.

* If you’re go­ing to sell on Amazon or else­where, take an ac­tual video of you pack­ing the cam­era. You need all the de­fense you can get against items mys­te­ri­ously dis­ap­pear­ing.

* Investigate more even-handed sell­ing ser­vices, like eBay, Fred Miranda, or other on­line re­tail­ers.

* If you need a backup cam­era, go ahead and rent one. I’m a fre­quent cus­tomer of BorrowLenses, and I in­fi­nitely re­gret not us­ing them this time.

* Keep your personal articles insurance policy up to date for any time the camera is in your possession, and use something like MyGearVault to keep track of all the serial numbers. I only had the camera for a couple of days altogether, but that was enough.

I hope that this was a worst-case, every­thing-goes-wrong sce­nario, and I hope that it does­n’t hap­pen to any­one else. There ought to be more even-handed fail­safes for these trans­ac­tions.

About the au­thor: Cliff is an am­a­teur land­scape and travel pho­tog­ra­pher. You can find more of his work on his web­site and Instagram ac­count.

...

Read the original on petapixel.com »

2 288 shares, 21 trendiness, 1267 words and 13 minutes reading time

Requirements volatility is the core problem of software engineering

The rea­son we de­velop soft­ware is to meet the needs of some cus­tomer, client, user, or mar­ket. The goal of soft­ware en­gi­neer­ing is to make that de­vel­op­ment pre­dictable and cost-ef­fec­tive.

It’s now been more than 50 years since the first IFIP Conference on Software Engineering, and in that time there have been many dif­fer­ent soft­ware en­gi­neer­ing method­olo­gies, processes, and mod­els pro­posed to help soft­ware de­vel­op­ers achieve that pre­dictable and cost-ef­fec­tive process. But 50 years later, we still seem to see the same kinds of prob­lems we al­ways have: late de­liv­ery, un­sat­is­fac­tory re­sults, and com­plete pro­ject fail­ures.

Take a gov­ern­ment con­tract I worked on years ago. It is un­doubt­edly the most suc­cess­ful pro­ject I’ve ever worked on, at least from the stand­point of the usual pro­ject man­age­ment met­rics: it was com­pleted early, it was com­pleted un­der bud­get, and it com­pleted a sched­uled month-long ac­cep­tance test in three days.

This project operated under some unusual constraints: the contract was denominated and paid in a foreign currency and was absolutely firm fixed-price, with no change management process in the contract at all. In fact, as part of the contract, the acceptance test was laid out as a series of observable, do-this-and-this-follows tests that could be checked off, yes or no, with very little room for dispute. Because of the terms of the contract, all the risk of any variation in requirements or in foreign exchange rates was on my company.

The process was ab­solutely, firmly, the clas­si­cal wa­ter­fall, and we pro­ceeded through the steps with con­fi­dence, un­til the fi­nal sys­tem was com­pleted, de­liv­ered, and the ac­cep­tance test was, well, ac­cepted.

After which I spent another 18 months with the system, modifying it until it actually satisfied the customer’s needs.

In the intervening year between the contract being signed and the software being delivered, reporting formats had changed, some components of the hardware platform had been superseded by new and better products, and regulatory changes had been made to which the system had to respond.

Requirements change. Every soft­ware en­gi­neer­ing pro­ject will face this hard prob­lem at some point.

With this in mind, all soft­ware de­vel­op­ment processes can be seen as dif­fer­ent re­sponses to this es­sen­tial truth. The orig­i­nal (and naive) wa­ter­fall process sim­ply as­sumed that you could start with a firm state­ment of the re­quire­ments to be met.

W. W. Royce is credited with first observing the waterfall in his paper “Managing the Development of Large Software Systems,” and the illustrations in hundreds of software engineering papers, textbooks, and articles are recognizably the diagrams that he created. But what’s often forgotten about Royce’s original paper is that he also says “[The] implementation [in the diagram] is risky and invites failure.”

Royce’s ob­ser­va­tion—that every de­vel­op­ment goes through rec­og­niz­able stages, from iden­ti­fy­ing the re­quire­ments and pro­posed so­lu­tion, through build­ing the soft­ware, and then test­ing it to see if it sat­is­fies those re­quire­ments—was a good one. In fact, every pro­gram­mer is fa­mil­iar with that, even in their first class­room as­sign­ments. But when your re­quire­ments change over the du­ra­tion of the pro­ject, you’re guar­an­teed that you won’t be able to sat­isfy the cus­tomer even if you com­pletely sat­isfy the orig­i­nal re­quire­ments.

There is re­ally only one an­swer to this: you need to find a way to match the re­quire­ments-de­vel­op­ment-de­liv­ery cy­cle to the rate at which the re­quire­ments change. In the case of my gov­ern­ment pro­ject, we did so ar­ti­fi­cially: there were no changes of any sub­stance, so it was sim­ple to build to the spec­i­fi­ca­tion and ac­cep­tance test.

Royce’s orig­i­nal pa­per ac­tu­ally rec­og­nized the prob­lem of changes dur­ing de­vel­op­ment. His pa­per de­scribes an it­er­a­tive model in which un­ex­pected changes and de­sign de­ci­sions that don’t work out are fed back through the de­vel­op­ment process.

Once we ac­cept the core un­cer­tainty in all soft­ware de­vel­op­ment, that the re­quire­ments never stay the same over time, we can be­gin to do de­vel­op­ment in ways that can cope with the in­evitable changes.

Start by ac­cept­ing that change is in­evitable.

Any pro­ject, no mat­ter how well planned, will re­sult in some­thing that is at least some­what dif­fer­ent than what was first en­vi­sioned. Development processes must ac­cept this and be pre­pared to cope with it.

As a con­se­quence of this, soft­ware is never fin­ished, only aban­doned.

We like to make a special, crisply-defined point at which a development project is “finished.” The reality, however, is that any fixed time at which we say “it’s done” is just an artificial dividing line. New features, revised features, and bug fixes will start to come in the moment the “finished” product is delivered. (Actually, there will be changes that are still needed, representing technical debt and deferred requirements, at the very moment the software is released.) Those changes will continue as long as the software product is being used.

This means that no soft­ware prod­uct is ever ex­actly, per­fectly sat­is­fac­tory. Real soft­ware de­vel­op­ment is like shoot­ing at a mov­ing tar­get—all the var­i­ous ran­dom vari­a­tions of aim, mo­tion of the tar­get, wind, and vi­bra­tion en­sure that while you may be close to the ex­act bulls­eye, you never ever achieve per­fec­tion.

Looked at in this light, soft­ware de­vel­op­ment could seem to be pretty de­press­ing, even dis­mal. It sounds as if we’re say­ing that the whole no­tion of pre­dictable, cost-ef­fec­tive de­vel­op­ment is chas­ing an im­pos­si­ble dream.

It’s not. We can be very ef­fec­tive de­vel­op­ers as long as we keep the re­al­i­ties in mind.

The first reality is that while perfection is impossible, pragmatic success is quite possible. The lean startup movement has made the MVP, the “minimum viable product,” the usual goal for startups. We need to extend this idea to all development, and recognize that every product is really an MVP: our best approximation of a solution given our current understanding of the problem.

The second reality is that we can’t really stop changes in requirements, so we need to work with the changes. This has been understood for a long time in actual software development: Parnas’s rule for identifying modules is to build modules that hide the requirements that can change. At the same time, there have been repeated attempts to describe software development processes that expect to provide successive approximations, that is, incremental development processes (I’ve called this “The Once and Future Methodology”).
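
To make Parnas’s rule concrete, here is a minimal Rust sketch (my own illustration, not code from the article; the trait and type names are invented). The volatile requirement, a reporting format of the kind that changed under the government contract above, is hidden behind one small interface so that a format change stays a local change.

    // Sketch of Parnas-style information hiding: the volatile requirement
    // (the report format) sits behind one small interface, so a format
    // change touches only one module.
    trait ReportFormatter {
        fn format(&self, records: &[(String, f64)]) -> String;
    }

    struct CsvFormatter;

    impl ReportFormatter for CsvFormatter {
        fn format(&self, records: &[(String, f64)]) -> String {
            records
                .iter()
                .map(|(name, value)| format!("{},{:.2}", name, value))
                .collect::<Vec<_>>()
                .join("\n")
        }
    }

    // The rest of the system depends only on the trait; swapping in a new
    // regulator-mandated layout means adding one new implementation.
    fn publish(formatter: &dyn ReportFormatter, records: &[(String, f64)]) {
        println!("{}", formatter.format(records));
    }

    fn main() {
        let records = vec![("widgets".to_string(), 1234.5)];
        publish(&CsvFormatter, &records);
    }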

Once we ac­cept the ne­ces­sity of in­cre­men­tal de­vel­op­ment, once we free our­selves from the no­tion of com­plet­ing the per­fect so­lu­tion, we can ac­cept changes with some calm con­fi­dence.

The third and fi­nal re­al­ity is that all sched­ules are re­ally time-boxed sched­ules. We go into a de­vel­op­ment pro­ject un­able to say ex­actly what the fi­nal prod­uct will be. Because of that, no early pre­dic­tion of time to com­plete can be ac­cu­rate, and all fi­nal de­liv­er­ies will be par­tial de­liv­er­ies.

The Agile Manifesto grew out of recog­ni­tion of these facts. Regular de­liv­ery of work­ing soft­ware is part of this recog­ni­tion: a truly ag­ile pro­ject has work­ing par­tial im­ple­men­ta­tions on a reg­u­lar ba­sis. Close re­la­tion­ships with the even­tual cus­tomer en­sure that as re­quire­ments changes be­come man­i­fest, they can be fit into the work plan.

In an ag­ile pro­ject, ide­ally, there is a work­ing par­tial im­ple­men­ta­tion start­ing very early in a pro­ject, and ob­serv­able progress is be­ing made to­ward a sat­is­fac­tory prod­uct from the first. Think of the tar­get-shoot­ing metaphor again—as we progress, we’re closer and closer to the cen­ter ring, the bulls­eye. We can be con­fi­dent that, when time is up, the prod­uct will be at least close to the goal.

Tags: ag­ile, bul­letin, process, soft­ware en­gi­neer­ing, stack­over­flow

...

Read the original on stackoverflow.blog »

3 271 shares, 11 trendiness, 1047 words and 10 minutes reading time

FCC forced by court to ask the public (again) if they think tearing up net neutrality was a really good idea or not

Comment The Federal Communications Commission (FCC) is ask­ing the American pub­lic to tell it if its de­ci­sion in 2017 to scrap net neu­tral­ity reg­u­la­tions was dumb or not.

In a striking piece of irony, and one that the FCC is distinctly unhappy about, the watchdog is legally obliged to seek public comment on three issues: how its decision has threatened public safety, damaged broadband infrastructure rollout, and prevented poor people from getting access to fast internet.

That obligation is the result of a legal challenge to the FCC’s decision to tear up net neutrality rules covering internet access in America. That challenge failed in court last year, largely because federal regulators are given significant leeway to decide their own rules, even when that means overturning rules they made just two years earlier, in 2015.

However, the court noted some se­ri­ous con­cerns about the FCC scrap­ping its own rules, and so told the reg­u­la­tor it needs to gather pub­lic feed­back on those is­sues and to con­sider what it needs to do to al­le­vi­ate con­cerns. Normally this would­n’t be a prob­lem. It is sim­ply a case of the ju­di­cial process car­ry­ing out its proper func­tion: iden­ti­fy­ing is­sues, and seek­ing to get them rec­ti­fied.

But the net neu­tral­ity is­sue has be­come so ide­o­log­i­cal and par­ti­san — thanks largely to the be­hav­ior of the FCC com­mis­sion­ers who pushed through a pre-de­cided out­come and ac­tively ig­nored pub­lic op­po­si­tion to their plans — that be­ing forced to ask the pub­lic where it screwed up is in it­self em­bar­rass­ing.

It is a vir­tual cer­tainty that net neu­tral­ity ad­vo­cates will glee­fully take the op­por­tu­nity to rail against the FCC, in just one more bat­tle of words over the safe­guards.

In a re­minder of just how petty fed­eral tele­coms reg­u­la­tion has be­come, the FCC can’t even take this im­plicit re­buke pro­fes­sion­ally. And so it at­tempted to hide the re­al­ity of the sit­u­a­tion by flood­ing its an­nounce­ments web­site on Wednesday with sud­denly im­por­tant news and de­scrib­ing the pub­lic com­ment pe­riod in the most ob­scure terms pos­si­ble.

That’s why, this week, we were treated to a string of PR spin and quotes about how the FCC is doing a great job by opening up spectrum. “What They’re Saying About Chairman Pai’s C-Band Plan,” reads one announcement that features nothing but quotes from people like Vice-President Mike Pence and the “Former Chairman of the House Permanent Select Committee on Intelligence Mike Rogers.”

The next an­nounce­ment cov­ers the ex­tremely im­por­tant news that the FCC is clos­ing an ap­pli­ca­tion to re­new sev­eral ra­dio sta­tions.

What else does the FCC have for us? Well, the vi­tal fact it has de­cided on the mem­ber­ship of the Advisory Committee on Diversity and Digital Empowerment. That also gets its own press re­lease and of­fi­cial an­nounce­ment.

Anything else? Yep. Here’s an entire release talking about how one FCC commissioner “applauds 5G workforce development grant.” We’re serious. Here we have Brendan Carr waxing lyrical about how he’s “thrilled that [Dept of Labor] has recognized the critical role of tower techs, linemen, and other 5G workers in building our country’s information infrastructure.”

As for the only piece of real news this week, the public comment period on public safety sparked by the net neutrality decision, you could be forgiven for missing it altogether. While all the other headlines are extremely clear, this one is confusingly titled: “WCB Seeks Comment on Discrete Issues Arising from Mozilla Decision.” Not even a mention of net neutrality.

The explanatory document is not much better. It is titled: “Wireline Competition Bureau Seeks to Refresh Record in Restoring Internet Freedom and Lifeline proceedings in light of the DC Circuit’s Mozilla decision.” (This is a reference to Firefox maker Mozilla’s legal campaign against the FCC.)

It’s hard to imag­ine a more ob­tuse ex­pla­na­tion. Anyway, dig hard enough and the de­tails are all in there: is there a risk to pub­lic safety com­mu­ni­ca­tions due to packet pri­or­i­ti­za­tion? Would harm­ful con­duct have been pro­hib­ited un­der the rules that were in place but were scrapped? Are there other ways to deal with po­ten­tial pub­lic safety con­cerns?

The FCC has clumped more con­vo­luted ver­sions of these ques­tions to­gether in long para­graphs, rather than break them out into clear and num­bered ques­tions as it fre­quently does when it is do­ing its job prop­erly.

There are other questions about pole attachments, which sound dull but are critically important, as one case currently in front of the Ninth Circuit makes clear, and about the Lifeline program that subsidizes broadband for low-income families.

Now, we’d love to tell you, Reg readers, that we spotted this important public comment period, despite the FCC’s best efforts to hide it, because we are so attuned to telecoms policy and the FCC that it immediately threw up red flags.

But the truth is that we only no­ticed thanks to FCC Commissioner Jessica Rosenworcel who re­mains the voice of san­ity at the fed­eral reg­u­la­tor. She saw what her col­leagues were try­ing to do, and so put out her own re­lease, which all com­mis­sion­ers are en­ti­tled to do.

That release was titled “Rosenworcel On FCC Seeking Public Comment On Net Neutrality Remand.” Which was pretty clear. In her release, Rosenworcel speaks plainly.

“The FCC got it wrong when it repealed net neutrality. The decision put the agency on the wrong side of history, the American public, and the law. And the courts agreed. That’s why they sent back to this agency key pieces regarding how the rollback of net neutrality protections impacted public safety, low income Americans, and broadband infrastructure,” she neatly summarizes.

She goes on: “Today, the FCC is seeking comment on how best to move forward. My advice? The American public should raise their voices and let Washington know how important an open internet is for every piece of our civic and commercial lives. The agency wrongfully gave broadband providers the power to block websites, throttle services, and censor online content. The fight for an open internet is not over. It’s time to make noise.”

To file a pub­lic com­ment, see the ob­fus­cated info here [PDF] and sub­mit your thoughts here, quot­ing pro­ceed­ings 17-108 for net neu­tral­ity aka restor­ing in­ter­net free­dom, by the end of March. ®

...

Read the original on www.theregister.co.uk »

4 268 shares, 12 trendiness, 216 words and 2 minutes reading time

The Bash Hackers Wiki [Bash Hackers Wiki]

This wiki is in­tended to hold doc­u­men­ta­tion of any kind about GNU Bash. The main mo­ti­va­tion was to pro­vide hu­man-read­able doc­u­men­ta­tion and in­for­ma­tion so users aren’t forced to read every bit of the Bash man­page - which can be dif­fi­cult to un­der­stand. However, the docs here are not meant as a new­bie tu­to­r­ial.

This wiki and any pro­grams found in this wiki are free soft­ware: you can re­dis­trib­ute it and/​or mod­ify it un­der the terms of the GNU General Public License as pub­lished by the Free Software Foundation, ei­ther ver­sion 3 of the License, or (at your op­tion) any later ver­sion.

This wiki and its pro­grams are dis­trib­uted in the hope that it will be use­ful, but WITHOUT ANY WARRANTY; with­out even the im­plied war­ranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more de­tails.

You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

What would YOU like to see here? (outdated and locked, please use the dis­cus­sions)

Stranger! Feel free to reg­is­ter and com­ment or edit the con­tents. There is a Bash Hackers Wiki needs love page that lists some things to do. The reg­is­tra­tion is only there to pre­vent SPAM.

...

Read the original on wiki.bash-hackers.org »

5 265 shares, 26 trendiness, 580 words and 6 minutes reading time

Twitter is suspending 70 pro-Bloomberg accounts, citing 'platform manipulation'

Michael R. Bloomberg’s pres­i­den­tial cam­paign has been ex­per­i­ment­ing with novel tac­tics to cul­ti­vate an on­line fol­low­ing, or at least the ap­pear­ance of one.

But one of the strate­gies — de­ploy­ing a large num­ber of Twitter ac­counts to push out iden­ti­cal mes­sages — has back­fired. On Friday, Twitter be­gan sus­pend­ing 70 ac­counts post­ing pro-Bloomberg con­tent in a pat­tern that vi­o­lates com­pany rules.

“We have taken enforcement action on a group of accounts for violating our rules against platform manipulation and spam,” a Twitter spokesman said. Some of the suspensions will be permanent, while in other cases account owners will have to verify they have control of their accounts.

As part of a far-reaching social media strategy, the Bloomberg campaign has hired hundreds of temporary employees to pump out campaign messages through Facebook, Twitter and Instagram. These “deputy field organizers” receive $2,500 per month to promote the former New York mayor’s candidacy within their personal social circles, in addition to other, more conventional duties. They receive campaign-approved language that they can opt to post.

In posts re­viewed by The Times, or­ga­niz­ers of­ten used iden­ti­cal text, im­ages, links and hash­tags. Many ac­counts used were cre­ated only in the last two months. Bloomberg of­fi­cially en­tered the pres­i­den­tial race on Nov. 24.

After The Times inquired about this pattern, Twitter determined it ran afoul of its “Platform Manipulation and Spam Policy.” Laid out in September 2019 in response to the activities of Russian-sponsored troll networks in the 2016 presidential election, the policy prohibits practices such as artificially boosting engagement on tweets and using deliberately misleading profile information.

Twitter said that, by sponsoring hundreds of new accounts that post copy-pasted content, the campaign violated its rules against “creating multiple accounts to post duplicative content,” “posting identical or substantially similar Tweets or hashtags from multiple accounts you operate” and “coordinating with or compensating others to engage in artificial engagement or amplification, even if the people involved use only one account.”

The suspensions may sweep up accounts belonging to unpaid Bloomberg supporters or campaign volunteers. While the Bloomberg campaign’s practice of paying Twitter users was a factor in the suspensions, a company spokesman said accounts behaving in substantially the same manner will receive the same treatment, regardless of who controls them.

In a statement, Sabrina Singh, a spokesperson for the Bloomberg campaign, said: “We ask that all of our deputy field organizers identify themselves as working on behalf of the Mike Bloomberg 2020 campaign on their social media accounts. Through Outvote [a voter-engagement app], content is shared by staffers and volunteers to their network of friends and family and was not intended to mislead anyone.”

Facebook’s response to the Bloomberg campaign’s novel social strategy has also been evolving. The social network views the campaign’s activity as falling under its rules for branded content, not the rules against “coordinated inauthentic behavior” devised largely in response to Russian election meddling.

Facebook’s rules for branded content require disclosure of paid partnerships anytime there has been “an exchange of value between a creator or publisher and a business partner.” In 2018, the company began to require more detailed disclosure for political ads to discourage state-sponsored influence operations.

The soft­ware tool cre­ated for buy­ing po­lit­i­cal ads on Facebook did not al­low for branded con­tent cam­paigns by in­flu­encers. Earlier this month, af­ter the Bloomberg cam­paign by­passed the tool en­tirely to mount a large-scale paid in­flu­encer cam­paign, Facebook lifted that ban.

...

Read the original on www.latimes.com »

6 241 shares, 33 trendiness, 1849 words and 14 minutes reading time

Learning Rust With Entirely Too Many Linked Lists

Got any is­sues or want to check out all the fi­nal code at once?

Everything’s on Github!

NOTE: The current edition of this book is written against Rust 2018, which was first released with rustc 1.31 (Dec 8, 2018). If your Rust toolchain is new enough, the Cargo.toml file that cargo new creates should contain the line edition = "2018" (or, if you’re reading this in the far future, perhaps some even larger number!). Using an older toolchain is possible, but unlocks a secret hardmode, where you get extra compiler errors that go completely unmentioned in the text of this book. Wow, sounds like fun!

I fairly fre­quently get asked how to im­ple­ment a linked list in Rust. The an­swer hon­estly de­pends on what your re­quire­ments are, and it’s ob­vi­ously not su­per easy to an­swer the ques­tion on the spot. As such I’ve de­cided to write this book to com­pre­hen­sively an­swer the ques­tion once and for all.

In this se­ries I will teach you ba­sic and ad­vanced Rust pro­gram­ming en­tirely by hav­ing you im­ple­ment 6 linked lists. In do­ing so, you should learn:

* All The Keywords: struct, enum, fn, pub, impl, use, …

Yes, linked lists are so truly aw­ful that you deal with all of these con­cepts in mak­ing them real.

Everything’s in the side­bar (may be col­lapsed on mo­bile), but for quick ref­er­ence, here’s what we’re go­ing to be mak­ing:

Just so we’re all on the same page, I’ll be writing out all the commands that I feed into my terminal. I’ll also be using Rust’s standard package manager, Cargo, to develop the project. Cargo isn’t necessary to write a Rust program, but it’s so much better than using rustc directly. If you just want to futz around, you can also run some simple programs in the browser via play.rust-lang.org.

Let’s get started and make our pro­ject:

> cargo new --lib lists

> cd lists

We’ll put each list in a sep­a­rate file so that we don’t lose any of our work.
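
For instance (a sketch of my own; the book introduces its actual module names chapter by chapter), the crate root can simply declare one module per list:

    // src/lib.rs -- one module per list, so earlier lists stay intact
    // as later chapters add new ones. (Module names here are illustrative.)
    pub mod first;
    pub mod second;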

It should be noted that the au­then­tic Rust learn­ing ex­pe­ri­ence in­volves writ­ing code, hav­ing the com­piler scream at you, and try­ing to fig­ure out what the heck that means. I will be care­fully en­sur­ing that this oc­curs as fre­quently as pos­si­ble. Learning to read and un­der­stand Rust’s gen­er­ally ex­cel­lent com­piler er­rors and doc­u­men­ta­tion is in­cred­i­bly im­por­tant to be­ing a pro­duc­tive Rust pro­gram­mer.

Although actually that’s a lie. In writing this I encountered way more compiler errors than I show. In particular, in the later chapters I won’t be showing a lot of the random “I typed (copy-pasted) bad” errors that you expect to encounter in every language. This is a guided tour of having the compiler scream at us.

We’re go­ing to be go­ing pretty slow, and I’m hon­estly not go­ing to be very se­ri­ous pretty much the en­tire time. I think pro­gram­ming should be fun, dang it! If you’re the type of per­son who wants max­i­mally in­for­ma­tion-dense, se­ri­ous, and for­mal con­tent, this book is not for you. Nothing I will ever make is for you. You are wrong.

Just so we’re to­tally 100% clear: I hate linked lists. With a pas­sion. Linked lists are ter­ri­ble data struc­tures. Now of course there’s sev­eral great use cases for a linked list:

* You want to do a lot of split­ting or merg­ing of big lists. A lot.

* You’re writ­ing a ker­nel/​em­bed­ded thing and want to use an in­tru­sive list.

* You’re using a pure functional language, and the limited semantics and absence of mutation make linked lists easier to work with.

But all of these cases are su­per rare for any­one writ­ing a Rust pro­gram. 99% of the time you should just use a Vec (array stack), and 99% of the other 1% of the time you should be us­ing a VecDeque (array deque). These are bla­tantly su­pe­rior data struc­tures for most work­loads due to less fre­quent al­lo­ca­tion, lower mem­ory over­head, true ran­dom ac­cess, and cache lo­cal­ity.
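
For reference, here is a quick sketch (mine, not the book’s) of those two defaults in action:

    use std::collections::VecDeque;

    fn main() {
        // Vec as an array stack: push/pop at the end, amortized O(1).
        let mut stack = Vec::new();
        stack.push(1);
        stack.push(2);
        assert_eq!(stack.pop(), Some(2));

        // VecDeque as an array deque: cheap pushes and pops at both ends.
        let mut deque = VecDeque::new();
        deque.push_back("b");
        deque.push_front("a");
        assert_eq!(deque.pop_front(), Some("a"));
        assert_eq!(deque.pop_back(), Some("b"));
    }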

Linked lists are as niche and vague of a data structure as a trie. Few would balk at me claiming a trie is a niche structure that your average programmer could happily never learn in an entire productive career — and yet linked lists have some bizarre celebrity status. We teach every undergrad how to write a linked list. It’s the only niche collection I couldn’t kill from std::collections. It’s the list in C++!

We should all as a community say no to linked lists as a “standard” data structure. It’s a fine data structure with several great use cases, but those use cases are exceptional, not common.

Several peo­ple ap­par­ently read the first para­graph of this PSA and then stop read­ing. Like, lit­er­ally they’ll try to re­but my ar­gu­ment by list­ing one of the things in my list of great use cases. The thing right af­ter the first para­graph!

Just so I can link di­rectly to a de­tailed ar­gu­ment, here are sev­eral at­tempts at counter-ar­gu­ments I have seen, and my re­sponse to them. Feel free to skip to the first chap­ter if you just want to learn some Rust!

Yes! Maybe your ap­pli­ca­tion is I/O-bound or the code in ques­tion is in some cold case that just does­n’t mat­ter. But this is­n’t even an ar­gu­ment for us­ing a linked list. This is an ar­gu­ment for us­ing what­ever at all. Why set­tle for a linked list? Use a linked hash map!

If per­for­mance does­n’t mat­ter, then it’s surely fine to ap­ply the nat­ural de­fault of an ar­ray.

Yep! Although as Bjarne Stroustrup notes this does­n’t ac­tu­ally mat­ter if the time it takes to get that pointer com­pletely dwarfs the time it would take to just copy over all the el­e­ments in an ar­ray (which is re­ally quite fast).

Unless you have a work­load that is heav­ily dom­i­nated by split­ting and merg­ing costs, the penalty every other op­er­a­tion takes due to caching ef­fects and code com­plex­ity will elim­i­nate any the­o­ret­i­cal gains.

But yes, if you’re pro­fil­ing your ap­pli­ca­tion to spend a lot of time in split­ting and merg­ing, you may have gains in a linked list.
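
To see the trade-off being described, here is a small sketch (my example, using the standard library’s LinkedList and Vec, not code from the book):

    use std::collections::LinkedList;

    fn main() {
        // Splicing two linked lists together is O(1): append just rewires
        // the tail of `a` to the head of `b`, moving no elements.
        let mut a: LinkedList<i32> = (0..3).collect();
        let mut b: LinkedList<i32> = (3..6).collect();
        a.append(&mut b); // b is left empty

        // The Vec equivalent moves every element of the second vector...
        let mut v: Vec<i32> = (0..3).collect();
        let mut w: Vec<i32> = (3..6).collect();
        v.append(&mut w);

        // ...yet, as argued above, the copy usually still wins in practice
        // unless splits and merges dominate the workload.
        assert_eq!(a.into_iter().collect::<Vec<_>>(), v);
    }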

You’ve al­ready en­tered a pretty niche space — most can af­ford amor­ti­za­tion. Still, ar­rays are amor­tized in the worst case. Just be­cause you’re us­ing an ar­ray, does­n’t mean you have amor­tized costs. If you can pre­dict how many el­e­ments you’re go­ing to store (or even have an up­per-bound), you can pre-re­serve all the space you need. In my ex­pe­ri­ence it’s very com­mon to be able to pre­dict how many el­e­ments you’ll need. In Rust in par­tic­u­lar, all it­er­a­tors pro­vide a size_hint for ex­actly this case.

Then push and pop will be truly O(1) operations. And they’re going to be considerably faster than push and pop on a linked list. You do a pointer offset, write the bytes, and increment an integer. No need to go to any kind of allocator.
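
A quick sketch of that point (my example, not the book’s): pre-reserving makes every push a plain write, and iterators advertise their length through size_hint.

    fn main() {
        // Pre-reserve when the element count is known (or bounded):
        // later pushes never reallocate, so they are truly O(1).
        let mut buf: Vec<u64> = Vec::with_capacity(1_000);
        for i in 0..1_000 {
            buf.push(i);
        }
        assert!(buf.capacity() >= 1_000);

        // Iterators report how many elements to expect via size_hint;
        // collect() uses it to size the resulting Vec up front.
        let hint = (0..1_000u64).size_hint();
        assert_eq!(hint, (1_000, Some(1_000)));
    }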

But yes, if you can’t pre­dict your load, there are worst-case la­tency sav­ings to be had!

Well, this is complicated. A “standard” array resizing strategy is to grow or shrink so that at most half the array is empty. This is indeed a lot of wasted space. Especially in Rust, we don’t automatically shrink collections (it’s a waste if you’re just going to fill it back up again), so the wastage can approach infinity!

But this is a worst-case sce­nario. In the best-case, an ar­ray stack only has three point­ers of over­head for the en­tire ar­ray. Basically no over­head.

Linked lists on the other hand unconditionally waste space per element. A singly-linked list wastes one pointer while a doubly-linked list wastes two. Unlike an array, the waste is a fixed cost per element, so the relative wastage depends on the size of the element: if you have huge elements this approaches 0 waste, but if you have tiny elements (say, bytes), then this can be as much as 16x memory overhead (8x on 32-bit)!

Actually, it’s more like 23x (11x on 32-bit) be­cause padding will be added to the byte to align the whole node’s size to a pointer.

This is also as­sum­ing the best-case for your al­lo­ca­tor: that al­lo­cat­ing and deal­lo­cat­ing nodes is be­ing done densely and you’re not los­ing mem­ory to frag­men­ta­tion.

But yes, if you have huge el­e­ments, can’t pre­dict your load, and have a de­cent al­lo­ca­tor, there are mem­ory sav­ings to be had!
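
The per-node overhead is easy to check (a sketch of mine; the exact numbers assume a typical 64-bit target and Rust’s usual niche optimization for Option<Box<_>>):

    use std::mem::size_of;

    // A singly-linked node holding a single byte, as discussed in the text.
    #[allow(dead_code)]
    struct Node {
        elem: u8,
        next: Option<Box<Node>>,
    }

    fn main() {
        // The Box pointer is 8 bytes and the node is padded to pointer
        // alignment, so one byte of payload costs a 16-byte node -- and
        // that is before counting any per-allocation overhead.
        println!("payload: {} byte", size_of::<u8>()); // 1
        println!("node: {} bytes", size_of::<Node>()); // 16 on 64-bit
    }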

Great! Linked lists are su­per el­e­gant to use in func­tional lan­guages be­cause you can ma­nip­u­late them with­out any mu­ta­tion, can de­scribe them re­cur­sively, and also work with in­fi­nite lists due to the magic of lazi­ness.

Specifically, linked lists are nice be­cause they rep­re­sent an it­er­a­tion with­out the need for any mu­ta­ble state. The next step is just vis­it­ing the next sub­list.

However it should be noted that Rust can pattern match on arrays and talk about sub-arrays using slices! It’s actually even more expressive than a functional list in some regards, because you can talk about the last element, or even “the array without the first and last two elements”, or whatever other crazy thing you want.

It is true that you can’t build a list us­ing slices. You can only tear them apart.

For lazi­ness we in­stead have it­er­a­tors. These can be in­fi­nite and you can map, fil­ter, re­verse, and con­cate­nate them just like a func­tional list, and it will all be done just as lazily. No sur­prise here: slices can also be co­erced to an it­er­a­tor.
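
Here is a small sketch of both points (my example, not the book’s): slice patterns name the pieces directly, and iterator chains stay lazy until you ask for values.

    fn main() {
        let xs = [1, 2, 3, 4, 5];

        // Slice patterns: talk about the ends of the data directly.
        let [first, .., last] = xs;
        println!("first = {}, last = {}", first, last);

        // ...or about "the array without the first and last elements".
        if let [_, middle @ .., _] = &xs[..] {
            println!("middle = {:?}", middle);
        }

        // Iterators are the lazy counterpart: this chain describes an
        // infinite sequence, and nothing runs until take()/collect().
        let evens: Vec<u64> = (0..).map(|n| n * 3).filter(|n| n % 2 == 0).take(4).collect();
        println!("{:?}", evens); // [0, 6, 12, 18]

        // Slices coerce to iterators as well.
        let doubled: Vec<i32> = xs.iter().map(|x| x * 2).collect();
        println!("{:?}", doubled);
    }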

But yes, if you’re lim­ited to im­mutable se­man­tics, linked lists can be very nice.

Note that I’m not saying that functional programming is necessarily weak or bad. However it is fundamentally semantically limited: you’re largely only allowed to talk about how things are, and not how they should be done. This is actually a feature, because it enables the compiler to do tons of exotic transformations and potentially figure out the best way to do things without you having to worry about it. However, this comes at the cost of being able to worry about it. There are usually escape hatches, but at some limit you’re just writing procedural code again.

Even in func­tional lan­guages, you should en­deav­our to use the ap­pro­pri­ate data struc­ture for the job when you ac­tu­ally need a data struc­ture. Yes, singly-linked lists are your pri­mary tool for con­trol flow, but they’re a re­ally poor way to ac­tu­ally store a bunch of data and query it.

Yes! Although writ­ing a con­cur­rent data struc­ture is re­ally a whole dif­fer­ent beast, and is­n’t some­thing that should be taken lightly. Certainly not some­thing many peo­ple will even con­sider do­ing. Once one’s been writ­ten, you’re also not re­ally choos­ing to use a linked list. You’re choos­ing to use an MPSC queue or what­ever. The im­ple­men­ta­tion strat­egy is pretty far re­moved in this case!

But yes, linked lists are the de­facto he­roes of the dark world of lock-free con­cur­rency.

It’s niche. You’re talk­ing about a sit­u­a­tion where you’re not even us­ing your lan­guage’s run­time. Is that not a red flag that you’re do­ing some­thing strange?

But sure. Build your awe­some zero-al­lo­ca­tion lists on the stack.

That’s a del­i­cate dance you’re play­ing. Especially if you don’t have a garbage col­lec­tor. I might ar­gue that your con­trol flow and own­er­ship pat­terns are prob­a­bly a bit too tan­gled, de­pend­ing on the de­tails.

But yes, you can do some re­ally cool crazy stuff with cur­sors.

Well, yeah: you’re reading a book dedicated to that premise. Singly-linked lists are pretty simple, though doubly-linked lists can get kinda gnarly, as we’ll see.

Ok. That’s out of the way. Let’s write a ba­jil­lion linked lists.

On to the first chap­ter!

...

Read the original on rust-unofficial.github.io »

7 227 shares, 28 trendiness, 543 words and 8 minutes reading time

Studio Ghibli suddenly makes 38 albums of anime music available on Spotify, Apple Music, and more

693-track se­lec­tion in­cludes many of ani­me’s most mem­o­rable pieces of mu­sic.

There’s noth­ing in the world of an­i­ma­tion that looks quite like the beau­ti­ful vi­su­als in a Studio Ghibli movie, and fans love go­ing back to re-watch the vi­su­als of their fa­vorite scenes and se­quences. But the films’ mu­sic is just as mem­o­rable, as any­one who’s ever caught them­selves hum­ming the theme to My Neighbor Totoro or Spirited Away knows.

So it was a sudden but extremely happy treat for us to learn that as of February 21, music from the collected films of Studio Ghibli is now available on the music streaming subscription services Spotify, Apple Music, Amazon Music, Google Play Music, and YouTube Music. Don’t worry, either, because the famously luddite-leaning studio isn’t limiting the online selection to a handful of its greatest hits: it has made a whopping 38 albums available, comprising 693 tracks.

So what are those 38 al­bums? To start with, the first 23 are the sound­tracks for every Studio Ghibli movie ever made ex­cept Grave of the Fireflies (which the stu­dio does­n’t con­trol the rights to), plus French co-pro­duc­tion The Red Turtle.

● Nausicaä of the Valley of the Wind Soundtrack-To the Far-off Land

● Castle in the Sky Laputa Soundtrack-Mystery of the Levistone

● My Neighbor Totoro Soundtrack Collection

● Kiki’s Delivery Service Soundtrack Collection

● Only Yesterday Original Soundtrack

● Porco Rosso Soundtrack

● Ocean Waves Soundtrack

● Pom Poko Soundtrack

● Whisper of the Heart Soundtrack

● Princess Mononoke Soundtrack

● My Neighbors the Yamadas Original Full Soundtrack

● Spirited Away Soundtrack

● The Cat Returns Soundtrack

● Ghiblies Episode 2 Soundtrack

● Howl’s Moving Castle Soundtrack

● Tales from Earthsea Soundtrack

● Ponyo Soundtrack

● Arrietty Soundtrack

● From Up on Poppy Hill Soundtrack

● The Wind Rises Soundtrack

● The Tale of the Princess Kaguya Soundtrack

● When Marnie Was There Soundtrack

● The Red Turtle Soundtrack

There’re also 14 im­age al­bums, with mu­sic in­spired by some of Ghibli’s most pop­u­lar films, and fi­nally a dou­ble-sized col­lec­tion of vo­cal songs from the stu­dio’s movies.

● Nausicaä of the Valley of the Wind Soundtrack-Bird People

● Castle in the Sky Laputa-The Girl Who Fell From the Sky

● My Neighbor Totoro Image Song Collection

● Kiki’s Delivery Service Image Album

● Only Yesterday Image Album

● Porco Rosso Image Album

● Pom Poko Image Album

● Whisper of the Heart Image Album

● Princess Mononoke Image Album

● Spirited Away Image Album

● Image Symphonic Suite Howl’s Moving Castle Soundtrack

● Ponyo Image Album

● From Up on Poppy Hill Image Album-Piano Sketch Collection

● The Tale of the Princess Kaguya Soundtrack-Songs for Female Chorus Trio

● Studio Ghibli Songs-Expanded Edition

Between this and Ghibli’s re­cent anime stream­ing agree­ments with Netflix and HBO, it looks like the stu­dio is fi­nally mov­ing past its re­luc­tance to dig­i­tally dis­trib­ute its art, though you’ll still have to make an ac­tual trip to the the­ater in­side the Ghibli Museum in Tokyo if you want to see the Totoro mini-se­quel or the lat­est thing Hayao Miyazaki has di­rected.

Sources: Tokuma Japan Communications, Spotify


...

Read the original on soranews24.com »

8 225 shares, 11 trendiness, 112 words and 1 minutes reading time

zeit/next.js


Companies / Sites us­ing Next.js

Next.js dis­cus­sions!!

Best links and re­sources about Next.js

Is this the Next.js Spectrum re­place­ment?

Is pos­si­ble to pre­fill the React Context API in the ge­tIni­tial­Props step?

Common chunks in server­less page bun­dles

What is the point of SSR these days?

Is the basePath fea­ture closer to be­com­ing a re­al­ity?


...

Read the original on github.com »

9 216 shares, 16 trendiness, 2930 words and 30 minutes reading time

OCaml 4.10 released

We have the pleasure of celebrating the birthday of Francis Ronalds by announcing the release of OCaml version 4.10.0.

Some of the high­lights in this re­lease are:

* A new best-fit allocator for the major heap, which reduces both GC cost and memory usage.

* Immutable strings are now en­forced at con­fig­u­ra­tion time

* Coming soon: statmemprof, a new statistical memory profiler. The external API will be released in the next version.

* Various im­prove­ments to the man­ual

Merlin, the OCaml ed­i­tor ser­vice, is not yet avail­able for this re­lease.

We will pub­lish a fol­low-up an­nounce­ment when Merlin is ready.

This release is (or soon will be) available as a set of OPAM switches, and as a source download here:

#7757, #1726: multi-indices for extended indexing operators: a.%{0;1;2} desugars to ( .%{ ;.. } ) a [|0;1;2|]
(Florian Angeletti, review by Gabriel Radanne)

* [breaking change] #1859, #9117: enforce safe (immutable) strings by removing the -unsafe-string option by default. This can be overridden by a configure-time option (available since 4.04 in 2016): --disable-force-safe-string since 4.08, -no-force-safe-string between 4.07 and 4.04. In the force-safe-string mode (now the default), the return type of the String_val macro in C stubs is const char* instead of char*. This change may break C FFI code.
(Kate Deplaix)

#6662, #8908: allow writing “module _ = E” to ignore module expressions
(Thomas Refis, review by Gabriel Radanne)

#8809, #9292: Add a best-fit allocator for the major heap; still experimental, it should be much better than current allocation policies (first-fit and next-fit) for programs with large heaps, reducing both GC cost and memory usage. This new best-fit is not (yet) the default; set it explicitly with OCAMLRUNPARAM="a=2" (or Gc.set from the program). You may also want to increase the space_overhead parameter of the GC (a percentage, 80 by default), for example OCAMLRUNPARAM="o=85", for optimal speed.
(Damien Doligez, review by Stephen Dolan, Jacques-Henri Jourdan, Xavier Leroy, Leo White)

[breaking change] #8713, #8940, #9115, #9143, #9202, #9251: Introduce a state table in the runtime to contain the global variables. (The Multicore runtime will have one such state for each domain.) This changes the status of some internal variables of the OCaml runtime; in many cases the header file originally defining the internal variable provides a compatibility macro with the old name, but programs re-defining those variables by hand need to be fixed.

#8993: New C functions caml_process_pending_actions{,_exn} in caml/signals.h, intended for executing all pending actions inside long-running C functions (requested minor and major collections, signal handlers, finalisers, and memprof callbacks). The function caml_process_pending_actions_exn returns any exception arising during their execution, allowing resources to be cleaned up before re-raising.
(Guillaume Munch-Maccagnoni, review by Jacques-Henri Jourdan, Stephen Dolan, and Gabriel Scherer)

[breaking change] #8691, #8897, #9027: Allocation functions are now guaranteed not to trigger any OCaml callback when called from C. In long-running C functions, this can be replaced with calls to caml_process_pending_actions at safe points. Side effect of this change: in bytecode mode, polling for asynchronous callbacks is performed at every minor heap allocation, in addition to function calls and loops as in previous OCaml releases.
(Jacques-Henri Jourdan, review by Stephen Dolan, Gabriel Scherer and Guillaume Munch-Maccagnoni)

[breaking change] #9037: caml_check_urgent_gc is now guaranteed not to trigger any finaliser. In long-running C functions, this can be replaced with calls to caml_process_pending_actions at safe points.
(Guillaume Munch-Maccagnoni, review by Jacques-Henri Jourdan and Stephen Dolan)

#8619: Ensure Gc.minor_words remains accurate after a GC.
(Stephen Dolan, Xavier Leroy and David Allsopp, review by Xavier Leroy and Gabriel Scherer)

#8670: Fix stack overflow detection with systhreads
(Stephen Dolan, review by Xavier Leroy, Anil Madhavapeddy, Gabriel Scherer, Frédéric Bour and Guillaume Munch-Maccagnoni)

* [breaking change] #8711: The major GC hooks are no longer allowed to interact with the OCaml heap.
(Jacques-Henri Jourdan, review by Damien Doligez)

#8630: Use abort() instead of exit(2) in caml_fatal_error, and add the new hook caml_fatal_error_hook.
(Jacques-Henri Jourdan, review by Xavier Leroy)

#8641: Better call stacks when a C call is involved in bytecode mode
(Jacques-Henri Jourdan, review by Xavier Leroy)

#8634, #8668, #8684, #9103 (originally #847): Statistical memory profiling. In OCaml 4.10, support for allocations in the minor heap in native mode is not available, and callbacks for promotions and deallocations are not available. Hence, there is not any public API for this feature yet.
(Jacques-Henri Jourdan, review by Stephen Dolan, Gabriel Scherer and Damien Doligez)

#9268, #9271: Fix bytecode backtrace generation with large integers present.
(Stephen Dolan and Mark Shinwell, review by Gabriel Scherer and Jacques-Henri Jourdan)

#7672, #1492: Add Filename.quote_command to produce properly-quoted commands for execution by Sys.command.
(Xavier Leroy, review by David Allsopp and Damien Doligez)

#8971: Add Filename.null, the conventional name of the “null” device.
(Nicolás Ojeda Bär, review by Xavier Leroy and Alain Frisch)

#8651: add “%#F” modifier in printf to output OCaml float constants in hexadecimal

...

Read the original on discuss.ocaml.org »

10 212 shares, 15 trendiness, 91 words and 1 minutes reading time

norvig/pytudes


...

Read the original on github.com »
