10 interesting stories served every morning and every evening.




1 1,585 shares, 57 trendiness

Thanks FedEx, This is Why we Keep Getting Phished

I've been getting a lot of those "your parcel couldn't be delivered" phishing attacks lately and if you're a human with a phone, you probably have been too. Just as a brief reminder, they look like this:

These get through all the technical controls that exist at my telco and they land smack bang in my SMS inbox. However, I don't fall for the scams because I look for the warning signs: a sense of urgency, fear of missing out, and strange URLs that look nothing like any parcel delivery service I know of. They have a pretty rough go of convincing me they're from Australia Post by putting "auspost" somewhere or other within each link, but I'm a smart human so I don't fall for this (that's a joke, read why humans are bad at URLs).

However… I am expecting a parcel. It's well into the 2020s and post-COVID, so I'm always expecting a parcel, because that's just how we buy stuff these days. And so, when I received the following SMS earlier this week I was expecting a parcel and I was expecting phishing attacks:

So… which is it? Parcel or phish? Let’s see what the peo­ple say:

Referring to the parent tweet, is this message legit and should I pay the duty and taxes? — Troy Hunt (@troyhunt) February 20, 2024

Whoa - that’s an 87% dodgy AF vote from over 4,000 re­spon­dents so yeah, that’s pretty em­phatic. Why such an over­whelm­ingly sus­pi­cious crowd? Let’s break that mes­sage down into 7 dodgy AF signs:

1. Phishers commonly make typos in their messaging and I know "FedEx" always capitalises the "E". And what's with the "-Exp"? Dodgy AF!
2. Why does the shipment number look so short? And why is it identical to the requested payment below? Dodgy AF!
3. Ah, so it's urgent is it? Urgency is a core tenet of social engineering as it encourages people to act without properly thinking it through. Dodgy AF!
4. Why are the "D" and the "T" capitalised? Dodgy AF!
5. This is a US-headquartered global parcel delivery service, why aren't they telling me the currency? Or even using a dollar sign? Dodgy AF!
6. Does this even need explaining? What's this "bpoint.com.au" service? It's definitely not a FedEx domain nor an Aussie gov one if we're talking duty and taxes. Dodgy AF!
7. So… you're going to give me the contact details for "any query" (not "queries", so there's another grammatical red flag), the very practice we're now moving away from for one simple reason: because it's dodgy AF!

And so, I was with the 87% of other peo­ple. However… I was ex­pect­ing a pack­age. From FedEx. Coming from out­side Australia so it may at­tract duty and taxes. And I re­ally want to get this pack­age be­cause it’s a new 3D printer from Prusa, and they’re awe­some!

There's a sage piece of advice that's always relevant in these cases and it's very simple: if in doubt, go to the website in question and verify the request yourself. So, I went to the purchase confirmation from Prusa, found the shipping details and followed the link to the FedEx website. Now it was simply a matter of finding the section that talks about tax, except…

I went all through that page and couldn't find a single reference to duty, nor to anything tax related. Try as I might, I couldn't establish the authenticity of the SMS by going directly to the (alleged) source. But what I could easily establish is that if you follow that link in the SMS, you can change the tracking number, the customer name and the amount to absolutely anything you want!
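To illustrate the kind of tampering involved, here is a minimal Python sketch; the URL and parameter names are hypothetical stand-ins, not the real ones used by the payment page:

from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

# Hypothetical link of the kind described above (not the real URL or parameters).
original = "https://pay.example.com/invoice?trackingNumber=123456&customerName=Troy&amount=108.50"

parts = urlparse(original)
params = parse_qs(parts.query)
params["customerName"] = ["Totally Legit Pty Ltd"]   # change the reflected values to anything at all
params["amount"] = ["999999.99"]

print(urlunparse(parts._replace(query=urlencode(params, doseq=True))))
# The page simply renders whatever arrives in the query string.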

This is all done by simply changing the URL parameters; I'm not modifying the browser DOM or intercepting traffic or doing anything fancy, it's literally just query string parameter tampering, reflected XSS style. This feels like every phishing site ever, not a payment service run by Australia's largest bank. Seriously, BPOINT is provided by the Commonwealth Bank and after the experience above, I'm at the point of reaching out to them and making a disclosure. Except that this is how the system was obviously designed to work, and it's a completely parallel issue to phishy FedEx SMSs. Speaking of which, the very next morning I got another one from the same sender:

I don’t know if this makes it bet­ter or worse 🤦‍♂️ Let’s just jump into the high­lights, both good and bad:

* My shipping number is now actually in the text of the email - yay!
* The words "duty" and "taxes" are now represented in the correct case - yay!
* The words "PAY NOW" are capitalised which seems… dodgy AF!
* And my favourite bit of all: the "link" isn't actually a link at all because it contains no scheme, no domain and no path, just the query string parameters! Dodgy AF!

It’s quite un­be­liev­able what they’ve done with the link be­cause it makes the SMS en­tirely un­ac­tion­able. It’s im­pos­si­ble to click any­where and pay the money. And while I’m here, why are all the query string pa­ra­me­ter names now cap­i­talised? It’s like there’s a com­pletely dif­fer­ent (broken) process some­where gen­er­at­ing these links. Or scam­mers just aren’t con­sis­tent…

Because dodgy AF is the pre­vail­ing theme, I needed to dig deeper, so I searched for the 1800 num­ber. One of the first re­sults was for a Reverse Australia page for that num­ber which upon read­ing the first 3 com­ments, per­fectly summed up the sen­ti­ment so far:

And the more you read both on that site and other top links in the search re­sults, the more peo­ple are to­tally con­fused about the le­git­i­macy of the mes­sages. There’s only one thing to do - call FedEx. Not by the num­ber in the (still po­ten­tially phishy) SMS, but rather via the num­ber on their web­site. So, click the Support” menu item, down to Customer Support” and we end up here:

I’ll save you the pain of read­ing the re­sponse that en­sued, suf­fice to say that it only re­ferred to email com­mu­ni­ca­tions and boiled down to sug­gest­ing you read the do­main of the sender. But I did man­age to pin the sys­tem down on a phone num­ber which as you’ll see, is com­pletely dif­fer­ent to the one in the SMS mes­sages:

So, I call the number and follow the voice prompts, selecting options via the keypad to route me through to the duty and taxes section. But eventually, several steps deep into the process, the system stops responding to key presses! "1" doesn't work and neither does "2" so without a response, the same message just repeats. But it does offer an alternative and suggests I call 132610. That's the number I called in the first place to get stuck in this infinite loop!

I try again, this time following a different series of prompts that eventually asks for a tracking number and then proceeds to tell me precisely what the website already does! But it also provides the option to speak to a customer service operator and I'm actually promptly put through. The operator explains that my shipment is valued at US$799 which converts to AU$1,215.97 and is therefore subject to some inbound fees. "Great, but how much and does it match what's in the phishy SMSs I've received?" He promises someone will call me back shortly…

And then, out of the blue 3 days af­ter the ini­tial phishy SMS ar­rived, an email landed in my in­box:

The dol­lar fig­ure, the BPOINT ad­dress and the mes­sag­ing all lined up with the SMSs, but that’s just merely cor­re­la­tion and if some­one had both my phone num­ber and email ad­dress they could eas­ily at­tempt to phish both with the same de­tails. But then, I looked at the at­tach­ment to the email and found this:

My com­plete Prusa in­voice was at­tached along with the or­der num­ber, price and ship­ping de­tails. In other words, 87% of you were wrong 😲

On a more serious note, Aussies alone are losing north of AU$3B annually to scams, and that's obviously only a drop in the ocean compared to the global scale of this problem. Our Australian Communications and Media Authority (ACMA) recently reported 336M blocked scam SMSs and technical controls like these are obviously great, but absent from their reporting was the number of scam messages they didn't block. There's an easy explanation for this omission: they simply don't know how many are sent. But if I were to take a guess, they've merely blocked the tip of the iceberg. This is why in addition to technical controls, we rely on human controls, which means helping people identify the patterns of a scam: requests for money, a sense of urgency, grammar and casing that's a bit off, odd looking URLs. You know, stuff like this:

What makes this sit­u­a­tion so ridicu­lous is that while we’re all watch­ing for scam­mers at­tempt­ing to im­i­tate le­git­i­mate or­gan­i­sa­tions, FedEx is out there im­i­tat­ing scam­mers! Here we are in the era of bur­geon­ing AI-driven scams that are be­com­ing in­creas­ingly hard for hu­mans to iden­tify, and FedEx is like here, hold my beer” as they one-up the scam­mers at their own game and do a per­fect job of be­ing com­pletely in­dis­tin­guish­able from them.

Ah well, as I ul­ti­mately lament in these sit­u­a­tions, it’s a good time to be in the in­dus­try 😊

...

Read the original on www.troyhunt.com »

2 1,410 shares, 57 trendiness

Save Flipper

Vehicle theft is an is­sue that af­fects us all col­lec­tively. As cy­ber­se­cu­rity and tech­nol­ogy pro­fes­sion­als, we rec­og­nize the im­por­tance of act­ing rapidly to re­duce its im­pact on Canadians. That be­ing said, we be­lieve the fed­eral gov­ern­men­t’s pro­posal, par­tic­u­larly the pro­hi­bi­tion of se­cu­rity re­search tools, is ill-ad­vised, over­broad and most im­por­tantly, will be coun­ter­pro­duc­tive.

Innovation, Science and Economic Development Canada (ISED) will pur­sue all av­enues to ban de­vices used to steal ve­hi­cles by copy­ing the wire­less sig­nals for re­mote key­less en­try, such as the Flipper Zero, which would al­low for the re­moval of those de­vices from the Canadian mar­ket­place through col­lab­o­ra­tion with law en­force­ment agen­cies.

This policy is based on outdated and misinformed technological assumptions, making it unfeasible to implement and enforce. Security tools like Flipper Zero are essentially programmable radios, known as Software Defined Radios (SDRs), a technology which has existed for years, and in some cases can be built using open-source or simple off-the-shelf components. These radios are also fundamentally the same as those used in numerous devices across various sectors, including smart household appliances, drones and aerospace technologies, mobile phones and networks, as well as industrial control systems. Consequently, prohibiting such functionality is virtually impossible and could stifle the Canadian economy significantly.

This pol­icy fails to rec­og­nize that these tools are not the en­emy, rather, in­se­cure prod­ucts are. Unlike decades ago when the in­dus­try re­lied on se­cu­rity through ob­scu­rity as a strat­egy, we now can at­test that the de­moc­ra­ti­za­tion of se­cu­rity re­search tools is a bal­anc­ing force for man­u­fac­tur­ers to im­prove the safety of their prod­ucts. Today, many in­dus­try ac­tors rely on such re­search, just like we have Federal & Provincial gov­ern­ment pro­grams that sup­port & re­ward se­cu­rity vul­ner­a­bil­ity dis­clo­sure that ben­e­fits us all. Implementing such a pol­icy would have a chill­ing ef­fect on these ef­forts, po­ten­tially un­der­min­ing their pos­i­tive im­pact on so­ci­ety.

Additionally, with bills such as C-244 (Right to Repair), which recently passed unanimously, and C-294 (Interoperability), which gathered support from multiple parties, we believe this overbroad policy will penalize legitimate analysis and repair use cases that were just made available to Canadians.

Because of the ar­bi­trary na­ture of such a pol­icy, we be­lieve the ju­di­ciary sys­tem will be faced with a slew of liti­gious cases around the many uses of these se­cu­rity tools. Instead, these re­sources could be fo­cused on cre­at­ing con­struc­tive com­mu­ni­ca­tion chan­nels be­tween cy­ber­se­cu­rity ex­perts, car man­u­fac­tur­ers, in­sur­ers, and the ju­di­ciary sys­tem to iden­tify ways to im­prove the se­cu­rity of au­to­mo­tive key­less en­try and push-to-start sys­tems, and en­force min­i­mal lev­els of se­cu­rity for fu­ture prod­ucts, as it is the case in other in­dus­tries.


If you agree with this let­ter, please sign it and share it. Signatures are added to the site roughly once a day, man­u­ally.

...

Read the original on saveflipper.ca »

3 1,350 shares, 53 trendiness

Institutions Try to Preserve the Problem to Which They Are the Solution

The Shirky principle is the adage that "institutions will try to preserve the problem to which they are the solution". More broadly, it can also be characterized as the adage that "every entity tends to prolong the problem it is solving".

For ex­am­ple, the Shirky prin­ci­ple means that a gov­ern­ment agency that’s meant to ad­dress a cer­tain so­ci­etal is­sue may hin­der at­tempts by oth­ers to ad­dress the is­sue, in or­der to en­sure that the agency re­mains rel­e­vant. Alternatively, the agency may be­come so fo­cused on the cur­rent way in which it ad­dresses the is­sue that it will fail to adopt bet­ter new so­lu­tions as they be­come avail­able, thus pro­long­ing the is­sue.

The Shirky prin­ci­ple has im­por­tant im­pli­ca­tions in var­i­ous do­mains, so it’s im­por­tant to un­der­stand it. As such, in the fol­low­ing ar­ti­cle you will learn more about this prin­ci­ple, and see what you can do about it in prac­tice.

An ex­am­ple of the Shirky prin­ci­ple are tax-fil­ing com­pa­nies who lobby the gov­ern­ment to pre­vent it from of­fer­ing a free and easy way to file taxes, to en­sure that the com­pa­nies can con­tinue to make a profit. A sim­i­lar ex­am­ple of this are pri­vate prison com­pa­nies who lobby the gov­ern­ment to sup­port poli­cies that in­crease the num­ber of in­car­cer­ated peo­ple and the du­ra­tion of their in­car­cer­a­tion.

Another well-known example of the Shirky principle is described in "Cognitive Surplus", a book by Clay Shirky that contained one of the first discussions of this principle:

"PickupPal.com is… a carpooling site designed to coordinate drivers and riders planning to travel along the same route. In May 2008 the Ontario-based bus company Trentway-Wagar… petitioned the Ontario Highway Transport Board (OHTB) to shut PickupPal down on the grounds that, by helping coordinate drivers and riders, it worked too well to be a carpool. Trentway-Wagar invoked Section 11 of the Ontario Public Vehicles Act, which stipulated that carpooling could happen only between home and work (rather than, say, school or hospital.) It had to happen within municipal lines. It had to involve the same driver each day. And gas or travel expense could be reimbursed no more frequently than weekly.

Trentway-Wagar was arguing that because carpooling used to be inconvenient, it should always be inconvenient, and if that inconvenience disappeared, then it should be reinserted by legal fiat. Curiously, an organization that commits to helping society manage a problem also commits itself to the preservation of that same problem, as its institutional existence hinges on society's continued need for its management. Bus companies provide a critical service—public transportation—but they also commit themselves, as Trentway-Wagar did, to fending off competition from alternative ways of moving people from one place to another.

The OHTB upheld Trentway-Wagar's complaint and ordered PickupPal to stop operating in Ontario. PickupPal decided to fight the case—and lost in the hearing. But public attention became focused on the issue, and in a year of high gas prices, burgeoning environmental concern, and a financial downturn, almost no one took Trentway-Wagar's side. The public reaction, channeled through everything from an online petition to T-shirt sales, had one message: Save PickupPal. The idea that people couldn't use such a service was too hot for the politicians in Ontario to ignore. Within weeks of Trentway-Wagar's victory, the Ontario legislature amended the Public Vehicles Act to make PickupPal legal again."

In ad­di­tion, the Shirky prin­ci­ple can also ap­ply to en­ti­ties other than in­sti­tu­tions. For ex­am­ple, an in­di­vid­ual em­ployee who’s in charge of a cer­tain process in their work­place might re­sist at­tempts to au­to­mate that process, in or­der to en­sure that the em­ployee re­mains nec­es­sary to their em­ployer.

A well-known example of the Shirky principle in this context is the cobra effect. It describes a case where British colonial officials in Delhi (India) set a bounty on dead cobras, in order to reduce the cobra population. However, this led citizens to breed the cobras for profit, and eventually to release them when the bounty was canceled.

A sim­i­lar in­ci­dent oc­curred circa 1902 in Hanoi (Vietnam), which was un­der French colo­nial rule at the time, when French of­fi­cials sought to re­duce the rat pop­u­la­tion in the city:

"To fight the infestation citywide, the colonial administration added vigilantes to its team of professional killers. Appealing to both civic duty and to the pocketbook, a one-cent bounty was paid for each rat tail brought to the authorities (it was decided that the handing in of an entire rat corpse would create too much of a burden for the already taxed municipal health authorities).

Unfortunately, this scheme backfired. Despite initial apparent success, the authorities soon discovered that the best laid plans of mice and men often go awry. As soon as the municipal administrators publicized the reward program, Vietnamese residents began to bring in thousands of tails. While many desk-bound administrators delighted in the numbers of apparently eliminated rats, more alert officials in the field began to notice a disturbing development. There were frequent sightings of rats without tails going about their business in the city streets. After some perplexity, the authorities realized that less-than-honest but quite resourceful characters were catching rats, but merely cutting off the tails and letting the still-living pests go free (perhaps to breed and produce more valuable tails).

Later, things became even more serious as health inspectors discovered a disturbing development in the suburbs of Hanoi. These officials found that more enterprising but equally deceptive individuals were actually raising rats to collect the bounty. One can only imagine the frustration of the municipal authorities, who realized that their best efforts at dératisation [extermination of rats] had actually increased the rodent population by indirectly encouraging rat-farming."

— From "Of rats, rice, and race: The great Hanoi rat massacre, an episode in French colonial history" (Vann, 2003)

Finally, note that the phe­nom­e­non de­scribed by the Shirky prin­ci­ple—en­ti­ties pro­long­ing a prob­lem to which they are the so­lu­tion—is­n’t nec­es­sar­ily the re­sult of in­ten­tional ac­tions. For ex­am­ple, a com­pany may in­ad­ver­tently per­pet­u­ate the prob­lem that it solves, be­cause its processes are so fo­cused on the mediocre so­lu­tion that they’re cur­rently sell­ing, that they don’t re­al­ize a bet­ter so­lu­tion ex­ists. Similarly, a com­pany may dis­cour­age the use of a cer­tain ap­proach to solv­ing a prob­lem be­cause it pre­vi­ously failed for them, even af­ter tech­no­log­i­cal ad­vance­ments make this ap­proach vi­able.

The Shirky prin­ci­ple was pro­posed in a 2010 blog post by Kevin Kelly, ed­i­tor of Wired mag­a­zine, who based it on the speak­ing and writ­ing of scholar Clay Shirky.

Specifically, Kelly at­trib­uted the adage that Institutions will try to pre­serve the prob­lem to which they are the so­lu­tion” to a state­ment that Shirky made in a re­cent talk, and noted that sim­i­lar state­ments were made by Shirky in an as­so­ci­ated blog post (“The Collapse of Complex Business Models”) and book (“Cognitive Surplus”). There, Shirky states that an or­ga­ni­za­tion that com­mits to help­ing so­ci­ety man­age a prob­lem also com­mits it­self to the preser­va­tion of that same prob­lem, as its in­sti­tu­tional ex­is­tence hinges on so­ci­ety’s con­tin­ued need for its man­age­ment”.

In ad­di­tion to men­tion­ing the key quote that is now known as the Shirky prin­ci­ple, Kelly also says the fol­low­ing in his blog post:

The Shirky Principle de­clares that com­plex so­lu­tions (like a com­pany, or an in­dus­try) can be­come so ded­i­cated to the prob­lem they are the so­lu­tion to, that of­ten they in­ad­ver­tently per­pet­u­ate the prob­lem.”

Later, he also says the fol­low­ing with re­gard to this prin­ci­ple (bold added here for em­pha­sis):

In a strong sense we are de­fined by the prob­lems we are solv­ing. Yin/Yang, prob­lem/​so­lu­tion, both sides form one unit. Because of the Shirky Principle, which says that every en­tity tends to pro­long the prob­lem it is solv­ing, progress some­times de­mands that we let go of prob­lems.”

Essentially, in his writ­ing on the topic, Kelly of­fers three for­mu­la­tions of the Shirky prin­ci­ple, which dif­fer in sub­tle but im­por­tant ways:

* The first for­mu­la­tion—“In­sti­tu­tions will try to pre­serve the prob­lem to which they are the so­lu­tion”—refers to in­sti­tu­tions, and states that they will try to pre­serve prob­lems, which im­plies that they do so in­ten­tion­ally.

* The sec­ond for­mu­la­tion—“Com­plex so­lu­tions (like a com­pany, or an in­dus­try) can be­come so ded­i­cated to the prob­lem they are the so­lu­tion to, that of­ten they in­ad­ver­tently per­pet­u­ate the prob­lem”—refers to com­plex so­lu­tions, and states that they of­ten in­ad­ver­tently per­pet­u­ate the prob­lem, which im­plies that they do so un­in­ten­tion­ally.

* The third for­mu­la­tion—”Every en­tity tends to pro­long the prob­lem it is solv­ing”—refers to en­ti­ties, and states that they tend to pro­long prob­lems, with­out mak­ing any claim about their in­ten­tions.

The first for­mu­la­tion is the one that’s most com­monly used when peo­ple dis­cuss the Shirky prin­ci­ple, though Kelly does not ac­tu­ally re­fer to it as the Shirky prin­ci­ple in his orig­i­nal blog post. The third for­mu­la­tion, on the other hand, is the most gen­eral, though one is­sue with it is that it states that every” en­tity en­gages in this kind of be­hav­ior, which is too ab­solute of a claim. However, this is­sue can be ad­dressed by slightly chang­ing this for­mu­la­tion, into entities tend to pro­long the prob­lems they are solv­ing”.

Note: In his post, Kelly states that Shirky’s ob­ser­va­tion re­minds him of the clar­ity of the Peter Principle, which says that a per­son in an or­ga­ni­za­tion will be pro­moted to the level of their in­com­pe­tence. At which point their past achieve­ments will pre­vent them from be­ing fired, but their in­com­pe­tence at this new level will pre­vent them from be­ing pro­moted again, so they stag­nate in their in­com­pe­tence.”.

There are some caveats about the Shirky principle that are important to keep in mind:

* The Shirky prin­ci­ple is just a gen­eral ob­ser­va­tion. As such, there are many sit­u­a­tions where it’s in­cor­rect. For ex­am­ple, an in­sti­tu­tion may suc­cess­fully solve the prob­lem to which they are the so­lu­tion be­cause there’s greater profit to be made that way than by pro­long­ing the prob­lem.

* The Shirky prin­ci­ple can in­volve var­i­ous types of en­ti­ties. Though the best-known for­mu­la­tion of the Shirky prin­ci­ple refers to institutions”, this prin­ci­ple can ap­ply to var­i­ous types of en­ti­ties, in­clud­ing in­di­vid­u­als and small so­cial groups. This is noted in the gen­eral for­mu­la­tion of the prin­ci­ple (“every en­tity tends to pro­long the prob­lem it is solv­ing”).

* The Shirky prin­ci­ple can in­volve var­i­ous causes. For ex­am­ple, one com­pany may pro­long a prob­lem un­in­ten­tion­ally, due to pas­siv­ity or in­er­tia, whereas an­other com­pany may pro­long a prob­lem in­ten­tion­ally, due to greed or self-preser­va­tion. This is re­flected in the gen­eral for­mu­la­tion of this prin­ci­ple, which does­n’t make any claims re­gard­ing the causes or in­ten­tion­al­ity of this phe­nom­e­non.

* The Shirky prin­ci­ple can in­volve var­i­ous pat­terns of be­hav­ior. For ex­am­ple, one com­pany may pro­long an ex­ist­ing prob­lem by not ded­i­cat­ing re­sources to de­vel­op­ing new so­lu­tions, whereas an­other com­pany may ac­tively pre­vent oth­ers from de­vel­op­ing such so­lu­tions.

In addition, the behaviors associated with the Shirky principle can vary in other ways. For example:

* An en­tity may not just pre­serve an ex­ist­ing prob­lem, but also ex­ac­er­bate it.

* An en­tity may cre­ate a prob­lem that did not pre­vi­ously ex­ist, if they can be the so­lu­tion to it.

* An en­tity may per­pet­u­ate a prob­lem that it ben­e­fits from, even if the en­tity is not ac­tu­ally a so­lu­tion to the prob­lem, though the en­tity may pre­tend that it is.

Based on this, a broader ver­sion of Shirky’s prin­ci­ple can be ex­pressed as:

Entities of­ten pro­mote prob­lems that they ben­e­fit from”.

Accounting for the Shirky prin­ci­ple can be ben­e­fi­cial when it comes to sev­eral things:

* Understanding past and cur­rent be­hav­ior. For ex­am­ple, it can help you un­der­stand why cer­tain in­sti­tu­tions are seem­ingly so bad at solv­ing cer­tain prob­lems, de­spite all the re­sources—like time, ef­fort, and money—that they ded­i­cate to those prob­lems.

* Predicting fu­ture be­hav­ior. For ex­am­ple, it can help you pre­dict that an ex­ec­u­tive will keep per­pet­u­at­ing a cer­tain prob­lem, in or­der to im­prove their own sta­tus within a com­pany, even though this leads to worse out­comes for the com­pany it­self.

* Modifying be­hav­ior. For ex­am­ple, if this makes you aware of some­one’s in­cen­tive to pro­long a prob­lem, that could lead you to ei­ther elim­i­nate the per­verse in­cen­tive or cre­ate a stronger dis­in­cen­tive. Similarly, this could lead you to point out the is­sue to the en­tity in ques­tion, in or­der to en­cour­age them to try and change their be­hav­ior them­selves if do­ing so can ben­e­fit them in the long term.

When de­cid­ing how and whether to use your un­der­stand­ing of the Shirky prin­ci­ple in prac­tice, it can help to as­sess rel­e­vant fac­tors per­tain­ing to your sit­u­a­tion, such as what’s caus­ing some­one to act in ac­cor­dance with this prin­ci­ple, and what out­comes their be­hav­ior leads to. For ex­am­ple, you will likely re­spond dif­fer­ently to a gov­ern­ment agency that’s per­pet­u­at­ing a prob­lem due to in­ef­fi­cient bu­reau­cracy, than to a pri­vate com­pany that’s per­pet­u­at­ing a prob­lem out of greed, or to an in­di­vid­ual who’s act­ing out of des­per­ate self-preser­va­tion.

Finally, there are also two use­ful con­cepts worth keep­ing in mind when ac­count­ing for Shirky’s prin­ci­ple:

* Cui bono, which is a Latin phrase that means who ben­e­fits?”, and which is used to sug­gest that there’s a high prob­a­bil­ity that those re­spon­si­ble for a cer­tain event are the ones who stand to gain from it.

* Hanlon’s ra­zor, which is the adage that you should never at­tribute to mal­ice that which is ad­e­quately ex­plained by stu­pid­ity”, and which, when ap­plied broadly, sug­gests that when as­sess­ing peo­ple’s ac­tions, you should not as­sume that they acted out of a de­sire to cause harm, as long as there is a rea­son­able al­ter­na­tive ex­pla­na­tion.

Parkinson’s law is the adage that work ex­pands so as to fill the time which is avail­able for its com­ple­tion” (or more gen­er­ally, that work ex­pands to con­sume the re­sources avail­able for its com­ple­tion”). It re­lates to Shirky’s prin­ci­ple, since both con­cepts pre­sent a com­mon way in which en­ti­ties are in­ef­fi­cient or in­ef­fec­tive in deal­ing with prob­lems that they’re sup­posed to solve.

Shirky’s prin­ci­ple also re­lates to an­other phe­nom­e­non that was iden­ti­fied by Parkinson, whereby the growth of a bu­reau­cratic or ad­min­is­tra­tive body is of­ten as­so­ci­ated with a sub­stan­tial de­crease in its over­all ef­fi­ciency. This is at­trib­uted to the de­sire of of­fi­cials to in­crease the num­ber of their sub­or­di­nates, and to of­fi­cials’ ten­dency to cre­ate work for each other.

In ad­di­tion, a sim­i­lar fa­mous con­cept that’s re­lated to Shirky’s prin­ci­ple has been ex­pressed by nov­el­ist and so­cial re­former Upton Sinclair, who said that It is dif­fi­cult to get a man to un­der­stand some­thing when his salary de­pends on his not un­der­stand­ing it.”

* The Shirky prin­ci­ple is the adage that institutions will try to pre­serve the prob­lem to which they are the so­lu­tion”.

* For ex­am­ple, the Shirky prin­ci­ple means that a gov­ern­ment agency that’s meant to ad­dress a cer­tain so­ci­etal is­sue may hin­der at­tempts by oth­ers to ad­dress the is­sue, in or­der to en­sure that the agency re­mains rel­e­vant.

* This prin­ci­ple can be ex­pressed more broadly as every en­tity tends to pro­long the prob­lem it is solv­ing”, since it can in­volve en­ti­ties other than in­sti­tu­tions (e.g., in­di­vid­u­als), and var­i­ous pat­terns of be­hav­ior (e.g., un­in­ten­tion­ally fo­cus­ing on an out­dated so­lu­tion vs. in­ten­tion­ally in­ter­fer­ing with com­pe­ti­tion).

* This prin­ci­ple can also be ex­tended to say that entities of­ten pro­mote prob­lems that they ben­e­fit from”, since en­ti­ties can also cre­ate new prob­lems, ex­ac­er­bate ex­ist­ing ones, and per­pet­u­ate prob­lems that they don’t ac­tu­ally solve.

* Accounting for this prin­ci­ple can help un­der­stand past and cur­rent be­hav­ior, pre­dict fu­ture be­hav­ior, and mod­ify prob­lem­atic be­hav­iors (e.g., by re­mov­ing per­verse in­cen­tives).

...

Read the original on effectiviology.com »

4 1,341 shares, 50 trendiness

Keep your phone number private with Signal usernames

Signal’s mis­sion and sole fo­cus is pri­vate com­mu­ni­ca­tion. For years, Signal has kept your mes­sages pri­vate, your pro­file in­for­ma­tion (like your name and pro­file photo) pri­vate, your con­tacts pri­vate, and your groups pri­vate — among much else. Now we’re tak­ing that one step fur­ther, by mak­ing your phone num­ber on Signal more pri­vate.

New default: Your phone number will no longer be visible to everyone in Signal

If you use Signal, your phone number will no longer be visible to everyone you chat with by default. People who have your number saved in their phone's contacts will still see your phone number since they already know it.

Connect without sharing your phone number

If you don't want to hand out your phone number to chat with someone on Signal, you can now create a unique username that you can use instead (you will still need a phone number to sign up for Signal). Note that a username is not the profile name that's displayed in chats, it's not a permanent handle, and not visible to the people you are chatting with in Signal. A username is simply a way to initiate contact on Signal without sharing your phone number.

Control who can find you on Signal by phone number

If you don't want people to be able to find you by searching for your phone number on Signal, you can now enable a new, optional privacy setting. This means that unless people have your exact unique username, they won't be able to start a conversation, or even know that you have a Signal account — even if they have your phone number.

Right now, these op­tions are in beta, and will be rolling out to every­one in the com­ing weeks.

Note that even once these fea­tures reach every­one, both you and the peo­ple you are chat­ting with on Signal will need to be us­ing the most up­dated ver­sion of the app to take ad­van­tage of them.

Importantly, all of this is op­tional. While we changed the de­fault to hide your phone num­ber from peo­ple who don’t have it saved in their phone’s con­tacts, you can change this set­ting. You are not re­quired to cre­ate a user­name and you have full con­trol over whether you want peo­ple to be able to find you by your phone num­ber or not. Whatever choices work for you and your friends, you’ll still be able to com­mu­ni­cate with your con­nec­tions in Signal, past and pre­sent.

Once these fea­tures roll out, your phone num­ber will no longer be vis­i­ble in Signal to any­one run­ning the lat­est ver­sion of Signal who does­n’t al­ready have it saved in their phone’s con­tacts. This means that when you par­tic­i­pate in group chats, mes­sage peo­ple 1-1, and make Signal calls, your phone num­ber won’t show up un­less the per­son has it saved (you can also limit this fur­ther, as de­tailed be­low). Your Signal pro­file name and photo will con­tinue to be vis­i­ble.

If you'd still like everyone to see your phone number when messaging them, you can change the default by going to Settings > Privacy > Phone Number > Who can see my number. You can either choose to have your phone number visible to "Everyone" you message on Signal or "Nobody." If you select "Nobody," the only people who will see your phone number in Signal are people who already have it saved to their phone's contacts. Changing your phone number privacy settings by going to Settings > Privacy > Phone number will change whether they see your phone number in your profile.

We're also introducing a setting that lets you control who can find you by your phone number on Signal. Up until today, anyone who had your phone number–from a party flier, a business card, or somewhere else–could look you up on Signal by phone number and message you. You can now restrict this by going to Settings > Privacy > Phone Number > Who can find me by my number and setting it to "Nobody."

Selecting "Everybody" means that anyone who has your phone number can type it into Signal and send you a message request (which you can accept, reject, or block). This is still the default setting, and is how Signal has worked for years. Changing your phone number privacy settings by going to Settings > Privacy > Phone number will change how people can connect with you on Signal.

Selecting "Nobody" means that if someone enters your phone number on Signal, they will not be able to message or call you, or even see that you're on Signal. And anyone you're chatting with on Signal will not see your phone number as part of your Profile Details page — this is true even if your number is saved in their phone's contacts. Keep in mind that selecting "Nobody" can make it harder for people to find you on Signal. If your friend downloads Signal and opens the app to see who they can message, they won't know that they can message you. Instead, in order to connect on Signal you will need to share your full, unique username with them.

You can change these set­tings at any time to best suit the ways you want to con­nect with oth­ers on Signal.

Until now, some­one needed to know your phone num­ber to reach you on Signal. Now, you can con­nect on Signal with­out need­ing to hand out your phone num­ber. (You will still need a phone num­ber to reg­is­ter for Signal.) This is where user­names come in.

Instead of giv­ing out your phone num­ber, you can now share a user­name. You can also gen­er­ate a QR code or link that di­rects peo­ple to your user­name, let­ting them quickly con­nect with you on Signal. Generate a QR code or unique URL to con­nect on Signal with­out shar­ing your phone num­ber by go­ing to Profile > QR Code or Link.

Usernames in Signal do not func­tion like user­names on so­cial me­dia plat­forms. Signal user­names are not lo­gins or han­dles that you’ll be known by on the app — they’re sim­ply a quick way to con­nect with­out shar­ing a phone num­ber. Your pro­file name re­mains what­ever you set it to. Your user­name is not dis­played on your Profile Details page, and peo­ple you mes­sage can’t see or find your user­name with­out your shar­ing it. Put an­other way, some­one will need to know your ex­act unique user­name in or­der to start a chat with you on Signal. And Signal does not pro­vide a search­able di­rec­tory of user­names. To con­nect via user­name, type some­one’s ex­act user­name into the New Chat bar and send them a mes­sage. Once they ac­cept your mes­sage re­quest, you’ll see their pro­file name in the chat.

We have also worked to en­sure that keep­ing your phone num­ber pri­vate from the peo­ple you speak with does­n’t ne­ces­si­tate giv­ing more per­sonal in­for­ma­tion to Signal. Your user­name is not stored in plain­text, mean­ing that Signal can­not eas­ily see or pro­duce the user­names of given ac­counts.

Usernames sim­ply al­low you to ini­ti­ate a con­nec­tion on Signal with­out shar­ing your phone num­ber, and Signal’s ro­bust pri­vacy safe­guards re­main un­changed. Signal is built so that we do not know who you mes­sage, what you say, which group chats you par­tic­i­pate in, who’s in your con­tact list, and more.

If you want to create a username, you can do so in Settings > Profile. A username on Signal (unlike a profile name) must be unique and must have two or more numbers at the end of it, a choice intended to help keep usernames egalitarian and minimize spoofing. Usernames can be changed as often as you like, and you can delete your username entirely if you prefer to no longer have one. To create a username, go to Settings > Profile.
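As a rough illustration of that naming rule, a minimal Python sketch (Signal's actual validation logic, allowed characters and length limits are not published in this post, so those details are guesses):

import re

def looks_like_signal_username(name: str) -> bool:
    # The post only says a username is unique and ends in two or more numbers;
    # the allowed characters and length limits below are illustrative guesses.
    allowed = re.fullmatch(r"[A-Za-z0-9_.]{3,32}", name)
    return bool(allowed) and bool(re.search(r"[0-9]{2,}$", name))

print(looks_like_signal_username("parcelfan.42"))  # True
print(looks_like_signal_username("parcelfan"))     # False: no trailing digits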

Once you’ve cre­ated a user­name, you can share it with oth­ers who can use it to con­nect with you. To con­nect with some­one via their user­name, sim­ply open the New Chat screen on Signal and type in their user­name.

Since Signal does not pro­vide a search­able di­rec­tory of user­names, only peo­ple who have your ex­act unique user­name will be able to start a con­ver­sa­tion with you. And you can share it with as few or as many peo­ple as you want.

You can also share a QR code or unique URL that short­cuts to your user­name in Signal. You can re­set these at any time with­out hav­ing to change your user­name, much like a group in­vite link.

Usernames in Signal are de­signed to be eas­ily change­able. For ex­am­ple, you can make a user­name to con­nect with peo­ple at a con­fer­ence or to plan a group trip. Then, when it’s over, change it if you want to. Just click on your user­name from your Profile Details page to make the changes you want. When you change your user­name, your Signal con­tacts are not no­ti­fied be­cause your user­name is not vis­i­ble to the peo­ple you are chat­ting with 1-1 or in groups.

Starting soon, your phone num­ber will no longer be vis­i­ble to peo­ple you chat with on Signal, un­less they have it in their phone’s con­tacts. You will also be able to con­fig­ure a new pri­vacy set­ting to limit who can find you by your phone num­ber on Signal. And, you’ll now be able to cre­ate an op­tional user­name that you can share with the peo­ple you want to con­nect with on Signal.

For more in­for­ma­tion, you can take a look at our sup­port cen­ter ar­ti­cles.

Currently these new fea­tures are in beta but will be rolling out to every­one in a few weeks.

Thanks to Nina Berman, Jun Harada, Ehren Kret, Joshua Lund, Jim O’Leary, Alex Ristevski, and Meredith Whittaker for help­ing to au­thor and edit this post. Don’t have Signal? Give it a try!

...

Read the original on signal.org »

5 1,095 shares, 38 trendiness

The killer app of Gemini Pro 1.5 is video

Last week Google in­tro­duced Gemini Pro 1.5, an enor­mous up­grade to their Gemini se­ries of AI mod­els.

Gemini Pro 1.5 has a 1,000,000 to­ken con­text size. This is huge—pre­vi­ously that record was held by Claude 2.1 (200,000 to­kens) and gpt-4-turbo (128,000 to­kens)—though the dif­fer­ence in to­k­enizer im­ple­men­ta­tions be­tween the mod­els means this is­n’t a per­fectly di­rect com­par­i­son.
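As an aside on why that comparison isn't perfectly direct: each vendor counts tokens with its own tokenizer. A minimal sketch using OpenAI's tiktoken library (Claude and Gemini use different, incompatible tokenizers, so the same text yields different counts):

import tiktoken  # OpenAI's tokenizer library (pip install tiktoken)

text = "The killer app of Gemini Pro 1.5 is video"
encoding = tiktoken.get_encoding("cl100k_base")  # the encoding used by gpt-4-turbo
print(len(encoding.encode(text)))
# Claude 2.1 and Gemini Pro 1.5 each use their own tokenizers, so their
# 200,000 and 1,000,000 token limits measure text with different rulers.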

I’ve been play­ing with Gemini Pro 1.5 for a few days, and I think the most ex­cit­ing fea­ture is­n’t so much the to­ken count… it’s the abil­ity to use video as an in­put.

I’ve been ac­cess­ing the model through the Google AI Studio in­ter­face—I don’t have API ac­cess yet.

I tried fill­ing up the con­text with con­tent from my blog, and got some good re­sults. But things got re­ally ex­cit­ing when I be­gan ex­per­i­ment­ing with video up­loads (after some false starts—the team have been ac­tively fix­ing bugs, with vari­able bi­trate sup­port land­ing just this morn­ing).

I took this seven sec­ond video of one of my book­shelves:

Then I up­loaded it with the fol­low­ing prompt:

JSON ar­ray of books in this video

That 7 sec­ond video con­sumed just 1,841 to­kens out of my 1,048,576 to­ken limit.

Gemini Pro 1.5 did­n’t re­turn JSON, but it did re­ply with this:

It is not pos­si­ble to fully iden­tify all the books in the video. However, here are some of the books that can be seen:

The Lord of the Rings by J. R.R. Tolkien

Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman

The Hacker Ethic and the Spirit of the Information Age by Pekka Himanen

The Art of Agile Development by James Shore and Shane Warden

Growing Up with Lucy by April Henry

Getting Things Done by David Allen

Information Architecture for the World Wide Web by Louis Rosenfeld and Peter Morville

Don’t Make Me Think: A Common Sense Approach to Web Usability by Steve Krug

Party Animals: My Family and Other Communists by David Aaronovitch

as a JSON ar­ray of ob­jects, with ti­tle and au­thor keys

Gave me that back as JSON

[
  {
    "title": "The Lord of the Rings",
    "author": "J.R.R. Tolkien"
  },
  {
    "title": "Structure and Interpretation of Computer Programs",
    "author": "Harold Abelson and Gerald Jay Sussman"
  },
  {
    "title": "Rework",
    "author": "Jason Fried and David Heinemeier Hansson"
  },
  {
    "title": "The Hacker Ethic and the Spirit of the Information Age",
    "author": "Pekka Himanen"
  },
  {
    "title": "The Google Story",
    "author": "David A. Vise"
  },
  {
    "title": "The Art of Agile Development",
    "author": "James Shore and Shane Warden"
  },
  {
    "title": "Creative Code",
    "author": "John Maeda"
  },
  {
    "title": "Web Operations",
    "author": "John Allspaw and Jesse Robbins"
  },
  {
    "title": "Python Framework",
    "author": "David Mertz"
  },
  {
    "title": "The Personal MBA",
    "author": "Josh Kaufman"
  },
  {
    "title": "Growing Up with Lucy",
    "author": "April Henry"
  },
  {
    "title": "Getting Things Done",
    "author": "David Allen"
  },
  {
    "title": "Site-Seeing: A Visual Approach to Web Usability",
    "author": "Luke Wroblewski"
  },
  {
    "title": "JavaScript: The Definitive Guide",
    "author": "David Flanagan"
  },
  {
    "title": "Building Accessible Websites",
    "author": "Joe Clark"
  },
  {
    "title": "Information Architecture for the World Wide Web",
    "author": "Louis Rosenfeld and Peter Morville"
  },
  {
    "title": "Don't Make Me Think: A Common Sense Approach to Web Usability",
    "author": "Steve Krug"
  },
  {
    "title": "Programming Erlang: Software for a Concurrent World",
    "author": "Joe Armstrong"
  },
  {
    "title": "XSLT 2.0 Web Development",
    "author": "Dmitry Kirsanov"
  },
  {
    "title": "Pragmatic Version Control Using Git",
    "author": "Travis Swicegood"
  },
  {
    "title": "Party Animals: My Family and Other Communists",
    "author": "David Aaronovitch"
  }
]

So how well did it do?

Honestly, I’m pretty as­ton­ished by this. The video is 7s long, it pans quite fast (there is some mo­tion blur in the video) and some of the books are par­tially ob­scured by items.

It did­n’t get all of them, but it did about as good a job as I could have done given the same video.

I'm particularly impressed that it got "Site-Seeing: A Visual Approach to Web Usability" by Luke Wroblewski, where the book itself is almost entirely obscured by a squirrel, leaving only "Site-Seeing" visible.

I did spot one hallucination: it lists "The Personal MBA by Josh Kaufman", but I don't own that book—the closest thing in the video is a blurry few frames of a book called "The Beermat Entrepreneur".
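The experiment above was run through AI Studio rather than the API, but for anyone who wants to try something similar programmatically, here is a rough sketch of what the equivalent call might look like with the google-generativeai Python SDK; the upload flow, polling and model name are assumptions about the SDK, not a record of what was actually run:

import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Upload the clip via the SDK's File API (availability assumed), then wait
# for server-side processing to finish before prompting against it.
video = genai.upload_file(path="bookshelf.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model name
response = model.generate_content([video, "JSON array of books in this video"])
print(response.text)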

I de­cided to try it again with a dif­fer­ent video and hope­fully a bet­ter prompt. This time I videoed a book­shelf mostly full of cook­books—this time in por­trait, a bit longer (22 sec­onds) and pan­ning down as well as across.

My new prompt was:

Output a JSON array of {"title": "…", "authors": "…"} objects for books in this video

It looks like the safety fil­ter may have taken of­fense to the word Cocktail”!

I opened up the safety set­tings, di­alled them down to low” for every cat­e­gory and tried again. It ap­peared to refuse a sec­ond time.

So I chan­nelled Mrs Doyle and said:

go on give me that JSON

[
  {
    "title": "The Cocktail Guy",
    "authors": "Rich Woods"
  },
  {
    "title": "Smuggler's Cove",
    "authors": "Martin Cate"
  },
  {
    "title": "The Curious Bartender",
    "authors": "Tristan Stephenson"
  },
  {
    "title": "The Art of Fermentation",
    "authors": "Sandor Ellix Katz"
  },
  {
    "title": "The Noma Guide to Fermentation",
    "authors": "René Redzepi & David Zilber"
  },
  {
    "title": "Cocktails #7",
    "authors": "Difford's Guide"
  },
  {
    "title": "The Curious Cookbook",
    "authors": ""
  },
  {
    "title": "Rival Crock Pot Cooking",
    "authors": ""
  },
  {
    "title": "The Art of French Cooking",
    "authors": "Fernande Garvin"
  },
  {
    "title": "The Book on Pie",
    "authors": "Erin Jeanne McDowell"
  },
  {
    "title": "The New Taste of Chocolate",
    "authors": ""
  }

...

Read the original on simonwillison.net »

6 1,071 shares, 40 trendiness

Introducing new state-of-the-art open models

Gemma is de­signed with our AI Principles at the fore­front. As part of mak­ing Gemma pre-trained mod­els safe and re­li­able, we used au­to­mated tech­niques to fil­ter out cer­tain per­sonal in­for­ma­tion and other sen­si­tive data from train­ing sets. Additionally, we used ex­ten­sive fine-tun­ing and re­in­force­ment learn­ing from hu­man feed­back (RLHF) to align our in­struc­tion-tuned mod­els with re­spon­si­ble be­hav­iors. To un­der­stand and re­duce the risk pro­file for Gemma mod­els, we con­ducted ro­bust eval­u­a­tions in­clud­ing man­ual red-team­ing, au­to­mated ad­ver­sar­ial test­ing, and as­sess­ments of model ca­pa­bil­i­ties for dan­ger­ous ac­tiv­i­ties. These eval­u­a­tions are out­lined in our Model Card.

We’re also re­leas­ing a new Responsible Generative AI Toolkit to­gether with Gemma to help de­vel­op­ers and re­searchers pri­or­i­tize build­ing safe and re­spon­si­ble AI ap­pli­ca­tions. The toolkit in­cludes:

* Safety clas­si­fi­ca­tion: We pro­vide a novel method­ol­ogy for build­ing ro­bust safety clas­si­fiers with min­i­mal ex­am­ples.

* Guidance: You can ac­cess best prac­tices for model builders based on Google’s ex­pe­ri­ence in de­vel­op­ing and de­ploy­ing large lan­guage mod­els.

You can fine-tune Gemma mod­els on your own data to adapt to spe­cific ap­pli­ca­tion needs, such as sum­ma­riza­tion or re­trieval-aug­mented gen­er­a­tion (RAG). Gemma sup­ports a wide va­ri­ety of tools and sys­tems:

* Multi-framework tools: Bring your favorite framework, with reference implementations for inference and fine-tuning across multi-framework Keras 3.0, native PyTorch, JAX, and Hugging Face Transformers (see the sketch after this list).

* Cutting-edge hard­ware plat­forms: We’ve part­nered with NVIDIA to op­ti­mize Gemma for NVIDIA GPUs, from data cen­ter to the cloud to lo­cal RTX AI PCs, en­sur­ing in­dus­try-lead­ing per­for­mance and in­te­gra­tion with cut­ting-edge tech­nol­ogy.

* Optimized for Google Cloud: Vertex AI pro­vides a broad MLOps toolset with a range of tun­ing op­tions and one-click de­ploy­ment us­ing built-in in­fer­ence op­ti­miza­tions. Advanced cus­tomiza­tion is avail­able with fully-man­aged Vertex AI tools or with self-man­aged GKE, in­clud­ing de­ploy­ment to cost-ef­fi­cient in­fra­struc­ture across GPU, TPU, and CPU from ei­ther plat­form.
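As a rough illustration of the Hugging Face Transformers route mentioned in the list above, a minimal sketch (the checkpoint ID, device placement and generation settings are assumptions, and the Gemma weights are gated, so the license must be accepted on the Hub first):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"  # instruction-tuned checkpoint (assumed ID)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs the accelerate package

prompt = "Write a short poem about open models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))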

Gemma is built for the open com­mu­nity of de­vel­op­ers and re­searchers pow­er­ing AI in­no­va­tion. You can start work­ing with Gemma to­day us­ing free ac­cess in Kaggle, a free tier for Colab note­books, and $300 in cred­its for first-time Google Cloud users. Researchers can also ap­ply for Google Cloud cred­its of up to $500,000 to ac­cel­er­ate their pro­jects.

You can ex­plore more about Gemma and ac­cess quick­start guides on ai.google.dev/​gemma.

As we con­tinue to ex­pand the Gemma model fam­ily, we look for­ward to in­tro­duc­ing new vari­ants for di­verse ap­pli­ca­tions. Stay tuned for events and op­por­tu­ni­ties in the com­ing weeks to con­nect, learn and build with Gemma.

We’re ex­cited to see what you cre­ate!

...

Read the original on blog.google »

7 1,006 shares, 121 trendiness

Netlify just sent me a $104K bill for a simple static site

So I received an email from Netlify last weekend saying that I have a $104,500.00 bill overdue. At first I thought this was a joke or some scam email, but after checking my dashboard it seems like I truly do owe them 104K dollars:

So I was like 😅😅😅 and thought okay, maybe I got DDoS attacked. Since Netlify charges $55 per 100GB of excess bandwidth, the peak day, Feb 16, works out to 33,385 / 55 * 100GB = 60.7TB of bandwidth in a single day. I mean, it's not impossible, but why attack a simple static site like mine? This site has been on Netlify for 4 years and has always been fine on the free tier. The monthly bandwidth never exceeded even 10GB, and the site has only ~200 daily visitors.
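A quick back-of-the-envelope check of those numbers in Python, using only the figures quoted above:

# Figures quoted above: $55 per 100GB of extra bandwidth.
price_per_100gb = 55

# The ~$33,385 charged for the peak day (Feb 16) implies:
peak_day_gb = 33385 / price_per_100gb * 100
print(peak_day_gb)        # 60700.0 GB, i.e. roughly 60.7TB in one day

# The whole $104,500 bill implies:
total_gb = 104500 / price_per_100gb * 100
print(total_gb)           # 190000.0 GB, i.e. roughly 190TB overall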

I contacted their billing support and they responded that they had looked into it and the bandwidth came from some user agents, meaning it was a DDoS attack. Then they said such cases happen and they usually charge their customers 20% in this situation. And since my amount is so large, they offered to discount it to 5%, which means I still need to pay 5 thousand dollars.

This feels more like a scam to me. Why do serverless platforms like Netlify and Vercel not have DDoS protection, or at least a spend limit? They should have alerted me when the spending skyrocketed. I checked my inbox and spam folder and found nothing. The only email is "Extra usage package purchased for bandwidth". It feels like they deliberately don't support these features so that they can cash grab in situations like this.

The DDoS attack was focused on a file on my site. Yes, it's partly my fault for putting a 3.44MB sound file on my site rather than using a third-party platform like SoundCloud. But this still doesn't invalidate the point of having protection against such attacks, and of limiting the spending.

I haven’t paid that $5k yet and de­cided to post here to hear what oth­ers think first. And yes I have mi­grated my site to Cloudflare. Learned my les­son and will never use Netlify (or even Vercel) again.

UPDATE: Thank you all for the sug­ges­tions I have posted this on HackerNews.

UPDATE: Here’s the email re­sponse I got from their billing sup­port:

I have taken down that .mp3 file but still, it’s only 3.44MB size and I don’t think it’s en­tirely my fault leav­ing it there.

...

Read the original on www.reddit.com »

8 938 shares, 31 trendiness

Stable Diffusion 3 — Stability AI

Prompt: Epic anime art­work of a wiz­ard atop a moun­tain at night cast­ing a cos­mic spell into the dark sky that says Stable Diffusion 3” made out of col­or­ful en­ergy

Announcing Stable Diffusion 3 in early pre­view, our most ca­pa­ble text-to-im­age model with greatly im­proved per­for­mance in multi-sub­ject prompts, im­age qual­ity, and spelling abil­i­ties. While the model is not yet broadly avail­able, to­day, we are open­ing the wait­list for an early pre­view. This pre­view phase, as with pre­vi­ous mod­els, is cru­cial for gath­er­ing in­sights to im­prove its per­for­mance and safety ahead of an open re­lease. You can sign up to join the wait­list here.

The Stable Diffusion 3 suite of mod­els cur­rently ranges from 800M to 8B pa­ra­me­ters. This ap­proach aims to align with our core val­ues and de­moc­ra­tize ac­cess, pro­vid­ing users with a va­ri­ety of op­tions for scal­a­bil­ity and qual­ity to best meet their cre­ative needs. Stable Diffusion 3 com­bines a dif­fu­sion trans­former ar­chi­tec­ture and flow match­ing. We will pub­lish a de­tailed tech­ni­cal re­port soon.

We be­lieve in safe, re­spon­si­ble AI prac­tices. This means we have taken and con­tinue to take rea­son­able steps to pre­vent the mis­use of Stable Diffusion 3 by bad ac­tors. Safety starts when we be­gin train­ing our model and con­tin­ues through­out the test­ing, eval­u­a­tion, and de­ploy­ment. In prepa­ra­tion for this early pre­view, we’ve in­tro­duced nu­mer­ous safe­guards. By con­tin­u­ally col­lab­o­rat­ing with re­searchers, ex­perts, and our com­mu­nity, we ex­pect to in­no­vate fur­ther with in­tegrity as we ap­proach the mod­el’s pub­lic re­lease.

Our com­mit­ment to en­sur­ing gen­er­a­tive AI is open, safe, and uni­ver­sally ac­ces­si­ble re­mains stead­fast. With Stable Diffusion 3, we strive to of­fer adapt­able so­lu­tions that en­able in­di­vid­u­als, de­vel­op­ers, and en­ter­prises to un­leash their cre­ativ­ity, align­ing with our mis­sion to ac­ti­vate hu­man­i­ty’s po­ten­tial. If you’d like to ex­plore us­ing one of our other im­age mod­els for com­mer­cial use prior to the Stable Diffusion 3 re­lease, please visit our Stability AI Membership page to self host or our Developer Platform to ac­cess our API.

To stay up­dated on our progress fol­low us on Twitter, Instagram, LinkedIn, and join our Discord Community.


...

Read the original on stability.ai »

9 933 shares, 36 trendiness

Happy New Year: GPT in 500 lines of SQL

This year, the talk of the town was AI and how it can do every­thing for you.

I like it when some­one or some­thing does every­thing for me. To this end, I de­cided to ask ChatGPT to write my New Year’s post:

"Hey ChatGPT. Can you implement a large language model in SQL?"

"No, SQL is not suitable for implementing large language models. SQL is a language for managing and querying data in relational databases. Implementing a language model requires complex algorithms, large datasets, and often deep learning frameworks, which are beyond the capabilities of SQL."

It just proves that if you want some­thing done right, you have to do it your­self.

Encouraged by this op­ti­mistic fore­cast, to­day we will im­ple­ment a large lan­guage model in SQL.

While writ­ing this post, I used the won­der­ful ar­ti­cle GPT in 60 Lines of NumPy by Jay Mody. This ar­ti­cle ex­plains the in­ner work­ings of a GPT model much bet­ter than I can hope to do. Still, a lit­tle re­cap is in or­der.

A generative LLM is a function. It takes a text string as input (called "prompt" in AI parlance), and returns an array of strings and numbers. Here's what the signature of this function looks like:
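A plausible rendering of that signature, inferred from the example call shown below (the exact annotations are an assumption, not the post's own code):

llm(prompt: str) -> list[tuple[str, float]]
# takes a prompt string, returns a (token, probability) pair for every token in the vocabulary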

This func­tion is de­ter­min­is­tic. It does a lot of math un­der the hood, but all this math is hard­wired. If you call it re­peat­edly with the same in­put, it will al­ways re­turn the same out­put.

It may come as a sur­prise to any­one who’s been us­ing ChatGPT and sim­i­lar prod­ucts be­cause they can give dif­fer­ent an­swers to the same ques­tion. Yet, it’s true. We will shortly see how it works.

What does the value returned by this function look like? Something like this:

llm("I wish you a happy New")

0       (' Year', 0.967553)
1       (' Years', 0.018199688)
2       (' year', 0.003573329)
3       (' York', 0.003114716)
4       (' New', 0.0009022804)
...
50252   (' carbohyd', 2.3950911e-15)
50253   (' volunte', 2.2590102e-15)
50254   ('pmwiki', 1.369229e-15)
50255   (' proport', 1.1198108e-15)
50256   (' cumbers', 7.568147e-17)

It returns an array of tuples. Each tuple consists of a word (or, rather, a string) and a number. The number is the probability that this word will continue the prompt. The model "thinks" that the phrase "I wish you a happy New" will be followed by the character sequence "Year" with a probability of 96.7%, "Years" of 1.8% and so on.

The word "think" above is quoted because, of course, the model doesn't really think. It mechanically returns arrays of words and numbers according to some hardwired internal logic.

Large lan­guage mod­els are used in text ap­pli­ca­tions (chatbots, con­tent gen­er­a­tors, code as­sis­tants etc). These ap­pli­ca­tions re­peat­edly call the model and se­lect the word sug­gested by it (with some de­gree of ran­dom­ness). The next sug­gested word is added to the prompt and the model is called again. This con­tin­ues in a loop un­til enough words are gen­er­ated.
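To make "some degree of randomness" concrete, here is one common way such a selection step could look. This is a minimal temperature-sampling sketch, not the post's own code, and the name select_next_token is simply borrowed from the pseudocode shown further down.

import random

def select_next_token(candidates: list[float], temperature: float = 1.0) -> int:
    # candidates[i] is the probability of token i. Re-weight by temperature
    # (must be > 0) and draw an index at random; lower temperature approaches
    # always picking the most probable token.
    weights = [p ** (1.0 / temperature) for p in candidates]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1  # numerical safety net

Applications that want reproducible output can skip the randomness entirely and always take the most probable token (greedy decoding).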

The ac­crued se­quence of words will look like a text in a hu­man lan­guage, com­plete with gram­mar, syn­tax and even what ap­pears to be in­tel­li­gence and rea­son­ing. In this as­pect, it is not un­like a Markov chain which works on the same prin­ci­ple.

The in­ter­nals of a large lan­guage model are wired up so that the next sug­gested word will be a nat­ural con­tin­u­a­tion of the prompt, com­plete with its gram­mar, se­man­tics and sen­ti­ment. Equipping a func­tion with such a logic be­came pos­si­ble through a se­ries of sci­en­tific break­throughs (and pro­gram­ming drudgery) that have re­sulted in the de­vel­op­ment of the fam­ily of al­go­rithms known as GPT, or Generative Pre-trained Transformer.

"Generative" means that it generates text (by adding continuations to the prompt recursively, as we saw earlier).

"Transformer" means that it uses a particular type of neural network, first developed by Google and described in this paper.

"Pre-trained" is a little bit historical. Initially, the ability for the model to continue text was thought of as just a prerequisite for a more specialized task: inference (finding logical connections between phrases), classification (for instance, guessing the number of stars in a hotel rating from the text of the review), machine translation and so on. It was thought that these two parts should have been trained separately, the language part being just a pre-training for a "real" task that would follow.

As the orig­i­nal GPT pa­per puts it:

We demon­strate that large gains on these tasks can be re­al­ized by gen­er­a­tive pre-train­ing of a lan­guage model on a di­verse cor­pus of un­la­beled text, fol­lowed by dis­crim­i­na­tive fine-tun­ing on each spe­cific task.

It was not un­til later that peo­ple re­al­ized that, with a model large enough, the sec­ond step was of­ten not nec­es­sary. A Transformer model, trained to do noth­ing else than gen­er­ate texts, turned out to be able to fol­low hu­man lan­guage in­struc­tions that were con­tained in these texts, with no ad­di­tional train­ing (“fine-tuning” in AI par­lance) re­quired.

With that out of the way, let’s fo­cus on the im­ple­men­ta­tion.

Here is what hap­pens when we try to gen­er­ate text from the prompt us­ing GPT2:

def generate(prompt: str) -> str:
    # Transforms a string into a list of tokens.
    tokens = tokenize(prompt)  # tokenize(prompt: str) -> list[int]

    while True:
        # Runs the algorithm.
        # Returns tokens' probabilities: a list of 50257 floats, adding up to 1.
        candidates = gpt2(tokens)  # gpt2(tokens: list[int]) -> list[float]

        # Selects the next token from the list of candidates
        next_token = select_next_token(candidates)
        # select_next_token(candidates: list[float]) -> int

        # Append it to the list of tokens
        tokens.append(next_token)

        # Decide if we want to stop generating.
        # It can be a token counter, a timeout, a stopword or something else.
        if should_stop_generating():
            break

    # Transform the list of tokens into a string
    completion = detokenize(tokens)  # detokenize(tokens: list[int]) -> str
    return completion

Let’s im­ple­ment all these pieces one by one in SQL.

Before a text can be fed to a neural net­work, it needs to be con­verted into a list of num­bers. Of course, that’s barely news: that’s what text en­cod­ings like Unicode do. Plain Unicode, how­ever, does­n’t re­ally work well with neural net­works.

Neural networks, at their core, do a lot of matrix multiplications and capture whatever predictive powers they have in the coefficients of these matrixes. Some of these matrixes have one row per every possible value in the "alphabet"; others have one row per "character".

Here, the words "alphabet" and "character" don't have the usual meaning. In Unicode, the "alphabet" is 149186 characters long (this is how many different Unicode points there are at the time of this writing), and a "character" can be something like this: ﷽ (yes, that's a single Unicode point number 65021, encoding a whole phrase in Arabic that is particularly important for the Muslims). Note that the very same phrase could have been written in usual Arabic letters. It means that the same text can have many encodings.

As an illustration, let's take the word "PostgreSQL". If we were to encode it (convert to an array of numbers) using Unicode, we would get 10 numbers that could potentially be from 1 to 149186. It means that our neural network would need to store a matrix with 149186 rows in it and perform a number of calculations on 10 rows from this matrix. Some of these rows (corresponding to the letters of the English alphabet) would be used a lot and pack a lot of information; others, like poop emoji and obscure symbols from dead languages, would hardly be used at all, but still take up space.
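As a quick, hedged illustration of those 10 numbers, Python's ord gives the raw code points of the word (the post counts positions among the 149186 assigned characters, which differs in detail but not in spirit):

[ord(c) for c in "PostgreSQL"]
# [80, 111, 115, 116, 103, 114, 101, 83, 81, 76]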

Naturally, we want to keep both these numbers, the "alphabet" length and the "character" count, as low as possible. Ideally, all the "characters" in our alphabet should be distributed uniformly, and we still want our encoding to be as powerful as Unicode.

The way we can do that, in­tu­itively, is to as­sign unique num­bers to se­quences of words that oc­cur of­ten in the texts we work with. In Unicode, the same re­li­gious phrase in Arabic can be en­coded us­ing ei­ther a sin­gle code point, or let­ter by let­ter. Since we are rolling our own en­cod­ing, we can do the same for the words and phrases that are im­por­tant for the model (i.e. show up of­ten in texts).

For instance, we could have separate numbers for "Post", "greSQL" and "ing". This way, the words "PostgreSQL" and "Posting" would both have a length of 2 in our representation. And of course, we would still maintain separate code points for shorter sequences and individual bytes. Even if we come across gibberish or a text in a foreign language, it would still be encodable, albeit longer.

GPT2 uses a variation of the algorithm called Byte pair encoding to do precisely that. Its tokenizer uses a dictionary of 50257 code points (in AI parlance, "tokens") that correspond to different byte sequences in UTF-8 (plus the "end of text" as a separate token).

This dic­tio­nary was built by sta­tis­ti­cal analy­sis per­formed like this:

Start with a sim­ple en­cod­ing of 256 to­kens: one to­ken per byte.

Take a large cor­pus of texts (preferably the one the model will be trained on).

Calculate which pair of tokens is the most frequent. Let's assume it's 0x20 0x74 (space followed by the lowercase "t").

Assign the next avail­able value (257) to this pair of bytes.

Repeat the steps 3-5, now paying attention to the byte sequences. If a sequence of bytes can be encoded with a complex token, use the complex token. If there are ambiguities (say, "abc" can, at some point, be encoded as "a" + "bc" or "ab" + "c"), use the one with the lowest number (because it was added earlier and hence is more frequent). Do this recursively until all sequences that can collapse into a single token will collapse into a single token. (A minimal Python sketch of this pair-merging loop is given below.)
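The sketch below illustrates the dictionary-building loop under simplifying assumptions: it greedily collapses the most frequent adjacent pair everywhere and skips the tie-breaking and re-encoding subtleties described above. It is not GPT2's actual training code, and build_bpe_vocab and its parameters are hypothetical names.

from collections import Counter

def build_bpe_vocab(corpus: list[bytes], num_merges: int = 50000) -> dict[tuple[int, int], int]:
    # Start with one token per byte (0..255); each merge gets the next free id.
    sequences = [[b for b in text] for text in corpus]
    merges: dict[tuple[int, int], int] = {}
    next_id = 256
    for _ in range(num_merges):
        # Count adjacent token pairs across the whole corpus.
        pair_counts = Counter()
        for seq in sequences:
            pair_counts.update(zip(seq, seq[1:]))
        if not pair_counts:
            break
        best = pair_counts.most_common(1)[0][0]
        merges[best] = next_id
        # Collapse every occurrence of the winning pair into the new token.
        for i, seq in enumerate(sequences):
            merged, j = [], 0
            while j < len(seq):
                if j + 1 < len(seq) and (seq[j], seq[j + 1]) == best:
                    merged.append(next_id)
                    j += 2
                else:
                    merged.append(seq[j])
                    j += 1
            sequences[i] = merged
        next_id += 1
    return merges

Encoding a new text then means greedily applying these merges, lowest token number first, until nothing in the text can be collapsed any further.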

The num­ber 50000 was cho­sen more or less ar­bi­trar­ily by the de­vel­op­ers. Other mod­els keep the num­ber of to­kens in a sim­i­lar range (from 30k to 100k).

At every iteration of this algorithm, a new token that is a concatenation of two previous ones will be added to the dictionary. Ultimately, we will end up with 50256 tokens (the 256 single-byte tokens plus the 50000 merged ones). Add a fixed-number token for "end-of-text", and we're done.

The GPT2 version of BPE has another layer of encoding: the token dictionary maps tokens to strings and not arrays of bytes. Mapping from bytes to string characters is defined in this function. We will save the dictionary it produces in the table encoder.
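The byte-to-character mapping mentioned above can be sketched roughly as follows. This closely paraphrases the widely published GPT2 reference helper (bytes_to_unicode); treat the details as an approximation rather than the post's own code. The idea is that every one of the 256 possible byte values gets a printable Unicode character, so token byte sequences can be stored as ordinary strings.

from functools import lru_cache

@lru_cache()
def bytes_to_unicode() -> dict[int, str]:
    # Visible, unambiguous bytes map to themselves; the remaining byte values
    # are shifted above 255 so that each still gets a printable character.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))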

Let’s see how we can im­ple­ment the to­k­enizer in SQL.

The to­k­enizer is an in­te­gral part of GPT2, and the to­ken dic­tio­nary can be down­loaded from OpenAI’s web­site along with the rest of the model. We will need to im­port it into the table to­k­enizer. At the bot­tom of this post, you will find a link to the code repos­i­tory. Its code will au­to­mate pop­u­lat­ing data­base ta­bles needed for the model.

In a re­cur­sive CTE, we will split this word into to­kens (starting with sin­gle bytes) and merge the best ad­ja­cent pairs, un­til there is noth­ing left to merge. The merg­ing it­self hap­pens in a nested re­cur­sive CTE.

For the demo, I will use the word Mississippilessly”. Each record in the re­sult­set shows the best pair to col­lapse found so far, and also the progress through the query.

WITH RECURSIVE
bpe AS
(
    SELECT  (n + 1)::BIGINT AS position, character, TRUE AS continue, 1 AS step,
            NULL::INT AS token, NULL::TEXT AS combined
    FROM    CONVERT_TO('Mississippilessly', 'UTF-8') AS bytes
    CROSS JOIN LATERAL
            GENERATE_SERIES(0, LENGTH(bytes) - 1) AS n
    JOIN    encoder
    ON      byte = GET_BYTE(bytes, n)
    UNION ALL
    (
        WITH RECURSIVE
        base AS
        (
            SELECT  *
            FROM    bpe
            WHERE   continue
        ),
        bn AS
        (
            SELECT  ROW_NUMBER() OVER (ORDER BY position) AS position,
                    continue,
                    character,
                    character || LEAD(character) OVER (ORDER BY position) AS cluster
            FROM    base
        ),
        top_rank AS
        (
            SELECT  tokenizer.*
            FROM    bn

...

Read the original on explainextended.com »

10 892 shares, 34 trendiness

...

Read the original on openrss.org »
