Features

Eugenia Fuenmayor

Scientific Director of Eurecat’s Digital Area

“Ethics cannot remain on the sidelines of technological evolution”

Discrimination caused by computational algorithms on social media is becoming a growing problem as artificial intelligence is integrated into these systems, a situation that could worsen with the development of generative AI

“The concern about integrating artificial intelligence is the privacy of the data that is given to the algorithms”

Computational algorithms are sequences of systematic, predefined instructions used to carry out many different tasks. One of these tasks is optimising the results of internet queries, a function performed by search and ranking algorithms. However, these algorithms reflect the values of whoever codes them, of whoever writes the instructions, and this is where biases that discriminate by gender, race or language come about.
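To make the point concrete, here is a minimal sketch (in Python, with entirely hypothetical posts, fields and weights) of how a single design choice in a ranking function can quietly demote content by language:

```python
# Hypothetical posts in a feed; "engagement" is a quality proxy.
posts = [
    {"id": 1, "engagement": 0.90, "language": "en"},
    {"id": 2, "engagement": 0.95, "language": "ca"},
    {"id": 3, "engagement": 0.60, "language": "en"},
]

# The coder decides to boost a "default" language. It looks like a
# neutral engineering choice, but it systematically demotes content
# in every other language, regardless of its quality.
LANGUAGE_BOOST = {"en": 1.2}

def score(post):
    return post["engagement"] * LANGUAGE_BOOST.get(post["language"], 1.0)

for post in sorted(posts, key=score, reverse=True):
    print(post["id"], post["language"], round(score(post), 2))
# Post 1 (en, engagement 0.90) now outranks post 2 (ca, engagement 0.95).
```

Nothing in the code announces a bias; the discrimination lives in one innocuous-looking constant.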

What’s the purpose of the research on biases on social media carried out by Eurecat’s digital area?
The ultimate goal is to achieve an internet with quality information in the context of big data. We work to identify hate speech and fake news, and we propose solutions to make the internet as transparent as possible. We carry out continuous technological monitoring so that industry can apply innovations at all times. We are a bridge between universities and companies.
Do you also analyse the use of artificial intelligence?
We work for fair and transparent artificial intelligence. We identify biases in training algorithms and risks to data privacy, and we apply solutions that minimise or eliminate them.
What research do you carry out?
We work on applied research projects, both in public consortia and in projects that are financed internally.
Are biases rife on social media?
We tend to find cases of discrimination in many different areas. For example, in the artificial intelligence systems of big companies like Google or Amazon. I remember a job offer presented by Google on one occasion in which men were prioritised over women. Amazon has done the same where technology jobs are concerned. Apart from job offers, we also have many everyday examples of discriminatory positions that go unnoticed by users.
Like what?
Well, voice assistants like Alexa or Siri. Both use women’s voices, and this perpetuates the service role that has been assigned to women throughout history. It is such a socially accepted stereotype that we find it absolutely normal.
What major biases have you detected?
The most numerous cases of discrimination are related to gender, although it is also the bias we look for the most when testing, and it is the focus of several of our projects. But in some cases gender discrimination is compounded by other types of discrimination. I remember a case where the discrimination increased when female gender and race were combined: a biometric surveillance system designed to identify the most suspicious subjects, which in this case turned out to be black women.
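An audit of the kind described here often boils down to comparing error rates across intersecting groups. The sketch below uses synthetic, purely illustrative records to show how a false positive rate broken down by gender and race can reveal compounded discrimination that neither attribute exposes on its own:

```python
from collections import defaultdict

# Synthetic, purely illustrative records: each one is
# (gender, race, flagged_as_suspicious, actually_suspicious).
records = [
    ("female", "black", True,  False),
    ("female", "black", True,  False),
    ("female", "black", False, False),
    ("female", "white", False, False),
    ("male",   "black", False, False),
    ("male",   "white", True,  True),
]

# Count false positives (innocent people flagged) per intersectional group.
flagged = defaultdict(int)
innocent = defaultdict(int)
for gender, race, was_flagged, is_suspicious in records:
    if not is_suspicious:
        group = (gender, race)
        innocent[group] += 1
        if was_flagged:
            flagged[group] += 1

# A large gap between groups signals compounded discrimination: in this
# toy data, innocent black women are flagged far more often than anyone else.
for group in sorted(innocent):
    rate = flagged[group] / innocent[group]
    print(group, f"false positive rate: {rate:.0%}")
```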
What other biases have you found?
We have worked with Wikipedia on detecting cultural gaps, that is, geographical areas with less coverage than others, and we provided tools to reduce them through the creation of the Wikipedia Diversity Observatory.
Incorporating AI into machine learning algorithms can both create biases and help locate them, but what will happen with generative AI? Will things get more complicated?
The great concern about integrating artificial intelligence is the privacy of the data that is given to the algorithms. And this is an extremely important issue, because systems receive more and more data to learn from. Big data is used to train models, and these models must be transparent and correct in order to deal with biases.
What needs to be taken into account?
The new language models are very convincing, but people are not aware that they may contain errors or biases, and their widespread use spreads this discrimination across society as a whole. At the moment, the European Union is trying to regulate this and has approved the start of talks that will lead to the first law in the world regulating the use of artificial intelligence, which is expected to be ready by the end of the year. The priority of this regulation is to ensure that artificial intelligence systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. It also wants to mandate that these systems be overseen by actual people.
Does the new law say anything about who must be involved in the design of the algorithms?
What we know for now is that the European regulations indicate that developers will bear a lot of responsibility, much more than companies, in the development of artificial intelligence. However, the problem remains the same as now: in the design and development of artificial intelligence, one woman is involved for every five men. Increasing diversity is more necessary than ever.
Would setting quotas in the design and development stages of artificial intelligence work?
Establishing quotas that positively discriminate in favour of women in this field is very difficult, because new technologies are constantly emerging and women are in the minority in STEM careers. The problem must be tackled earlier, by schools and families. The statistics tell us that there is as much talent among girls as among boys, and in Spain more women enrol in university than men, but only 30% of women choose STEM studies, and within this area the percentage drops further where AI and computing are concerned. We must break this cycle and create female role models for girls now starting school, explaining that the first person ever to write an algorithm and the first to write a compiler were women (Ada Lovelace and Grace Hopper), although their names are rarely mentioned. Added to this is the fact that the vast majority of university professors in STEM degree subjects are men.
There are growing demands for ethics to be included in technological development, especially among those opposed to the excessive growth of AI.
Technological evolution cannot be stopped, and ethics cannot be left to one side. Artificial intelligence will change a lot of things, for better and for worse, but it has no reasoning behind it and, luckily, it still has a long way to go before it becomes analytical. One step further will be reflective artificial intelligence, which will indeed be able to take control of systems. So our goal must be to build the best machines possible, but with humans still in control.
Will machines replace people?
Never! A machine has no feelings. You can make it simulate vision or smell, but it can never experience what people perceive as passion, risk, love, compassion... We should not be afraid of technological advances, but we should monitor the misuse of technology, remain watchful, and anticipate discrimination in order to minimise it as much as possible. Personally, I am very optimistic in this regard.
And that means…?
That I believe in humanity.


Researcher and teacher

Eugenia Fuenmayor graduated in Computer Science in 1980, in Venezuela. Her case, she explains, was “totally atypical”: “Women made up almost 90% of the university students, but when I went on to do my doctorate, I was the only woman enrolled in the course.” Twenty-two years ago, she moved to Barcelona. Currently, in addition to her position at Eurecat, she teaches an introduction to programming course at Pompeu Fabra University, to a class mostly made up of male students. Yet she points out that at Eurecat, “48% of the more than 700 people who work there are women”, a proportion that decreases slightly in the digital area of the technology centre, although “we are making progress”, she insists.
