A patent is “an exclusive right granted for an invention”, providing protection to the patent owner for a limited time (usually 20 years). The original purpose of patents was to provide a financial incentive for creating and innovating while preventing other people from ripping off the idea and using it themselves. In theory, this practice makes sense. If you’re going to be an inventor and make super cool things, and you’re the true, first person to make said super cool things, then I believe you should have the right to protect this information and the product from being poached by other people. Considering that patents are filed territorially (that is, if you file for a patent in the US, it only applies in the US unless you file elsewhere), you’re pretty much competing against a couple hundred million people for the right to your own intellectual property, assuming you are the first to file a patent for this particular thing. Thus, I think it makes sense that patents exist.
I think patents used for their original purpose are beneficial to society and promote innovation. I think they’re beneficial because they give people an incentive to build and create, since in a society like ours, the only way you’re going to end up on top is by having a ton of money (which is just a sad truth). We’re also a society reliant on advancing technologies, spoiled by everything that we now have easy access to, from kitchen gadgets to the mobile internet. Generally speaking, we only want to continue to advance and grow, coming up with new ideas and rolling out novel devices. That being said, I’m a little iffy on whether I think they’re necessary for society. On the one hand, there are certainly benefits, and I think that having them gives more structure to the intellectual world and to innovators everywhere. On the other hand, since the system is easily susceptible to corruption and manipulation, there are some instances in which having patents is more detrimental than beneficial. For example, in the world of software patents, we really need to lay down some clear guidelines on what can and cannot be protected by a patent. I was really frustrated reading the Vox article about the Supreme Court not understanding what software really is. Because of their lack of understanding, they cannot set clear guidelines about what should and should not merit patent protection, and the whole software industry, people and companies alike, suffers. They say that “mathematical algorithms can’t be patented”, but software is just a bunch of mathematical algorithms. In theory, anyone with a computer could eventually recreate the software. I don’t think it’s the same as having a physical device that you put together yourself. I mean, it’s one thing to create software that manipulates a physical machine, but another to have software that exists solely on a computer (or similar device, like a phone or tablet).
So I guess I’m kind of in the boat of patents being restricted to physical, tangible artifacts, at least for the sake of clarity. However, at the same time, I don’t think companies should just be able to outright copy other people’s software. For example, if some smaller company built a super cool game, I don’t think it’d be right if a larger company went and took the idea, created a near-identical game, and made way more money than the smaller company just because people have heard of them. In this case, I’d much rather root for David than Goliath. On the other hand, when I read about patent trolls, I was even more frustrated. In that case, I wanted to root for Goliath (software companies) rather than David (patent troll companies). These trolls literally make money by cheaply buying up patents they never filed for themselves from bankrupt or struggling companies. Then they basically just go out and sue everybody for the slightest of patent infringements. To me, it’s bogus, and I think it totally abuses the system and ultimately harms the software companies who are just trying to innovate.
I think the motivation for developing and building self-driving cars is two-fold. On one hand, innovators are, by their very nature, driven (no pun intended) to produce technology that changes the world; since cars permeate modern society, upgrading or changing them in some way will obviously have a large impact on the way humans live each day. On the other hand, like with the argument for government, it comes down to safety. According to the New York Times article, autonomous driving could reduce the number of people killed in traffic accidents by as much as 40%. With the number of auto-related deaths increasing in the US, this is an attractive statistic. However, with all great technological advancements comes great responsibility. In theory, self-driving cars could make our roads safer, but the technology is only going to be as good as the engineering behind it. I’ve said this probably a hundred times, but technology is also hackable, so that’s a concern as well. However, like one of the articles mentioned, high-profile cases give automated driving a bad name, and people tend to remember the fiery deaths in Mountain View as opposed to the hundreds (thousands? I don’t actually know how many people have self-driving cars and consistently use the feature) of other consumers who have used the software and have had no problems.
Like the people surveyed in the Gizmodo article, I would consider myself someone who believes in a utilitarian approach to programming self-driving cars but would not want one programmed that way myself. I don’t really think I like the idea of fully-automated self-driving cars. It reminds me of when I wrote a paper for Philosophy of Technology about autopilot on airplanes. They say that the pilots of today are not as good as the pilots of the past because modern-day pilots have come to rely on modern technology. In other words, an older, more experienced pilot would be better able to spring into action in emergency situations when autopilot fails than a younger pilot would. Along similar lines, I don’t want to become someone who is fully reliant on a car to drive me places; I think some features are useful, for sure, like staying in your lane or emergency braking, but the idea of letting the car take control of the wheel without me makes me a little nervous. All that being said, regardless of whether I would want a self-driving car or not, this technology is well on its way to completion. However, the standards of automated driving, according to the Toyota rep, have not yet been reached and may not be for a while. Meanwhile, Tesla already ships full self-driving hardware in all of its vehicles. My Aunty recently bought a Tesla, but I believe it was an extra few thousand dollars to have the self-driving hardware activated. She’s in her early 60s, and she decided to forgo that. Not only did it not seem particularly useful to her, but since she’s been driving for so long, I don’t think she really felt comfortable with it. It’s an interesting bit to think about: what are the social, economic, and political ramifications of the mass introduction of self-driving cars?
I think people will be wary, and marketers will need to throw a lot of good statistical numbers at consumers, as well as deploy examples of their technology at work, before people are comfortable with the idea of handing over the wheel to an AI. Reading these articles about automation is interesting. On the one hand, I want to applaud the strides the tech community is making in automation and in robots doing jobs people typically do, because it's really cool; on the other hand, I worry about what this means for our future, what sorts of jobs will disappear, and what will be created in their place. Though people have been arguing that technological innovation is a constant throughout recent history that actually creates jobs in the end, I don't know how much I believe that this trend will continue the more we replace people with robots. (I also get wary about the whole idea in general because I've watched every episode of Black Mirror and know tech is hackable.)
Anyway, you're asking a lot when you wonder about the political, social, and economic implications of replacing human labor with automation on a massive scale. I'm no political/social activist, nor am I an economist, but my gut tells me that the impact is going to be pretty significant; it may be for the better, or it could be for the worse. I don't think the Luddites were right about technology, though to an extent, maybe I agree that technology shouldn't be pervasive through all industries, because there are some places where I just want a human to help me out. For example, I'd rather deal with a human doctor than some robot. Robots are way too unemotional (since obviously they are without emotion), and having them deal with personal issues is an uncomfortable idea (at least to me). However, while I believe automation should continue to be advanced and developed, I don't think it should just steamroll over industries and remove people from jobs without anyone thinking of the consequences. We already know the effects of unemployment today. A massive displacement of jobs, I think, would be crippling to US morale and put people in a hard place. To alleviate this stress and pain, something needs to be done before automation can just wipe out millions of jobs. As I said before, I'm not totally comfortable with AI taking over work normally performed by humans. I mean, a cashier is one thing (how many times do I have to repeat my order before the cashier hears it correctly?? Curse my quiet voice), but for robots to replace nannies, doctors, therapists, dog sitters, or anything like that where the human connection and relationship is significant would be preposterous. I can't imagine having some robot rocking a baby to sleep. I wouldn't want something so precious to be in the "hands" of something like that.
As for life-or-death decisions, perhaps a robot can tell me the statistics and tell me what my odds are for my survival, but hell if I'm gonna let it pull the plug on me when I can still feel myself kicking. I don't like not feeling in control of my life, and having a robot tell me whether or not I'm going to live bugs me out. UBI seems like a cool idea in theory, and I totally agree with Jim Pugh who said that the addition of Basic Income isn't going to make people stop working. Sometimes I think about why I'm even getting a fancy college degree and why I'm killing myself to find a job, but then I remember it's because I want to do cool things like travel and get a dog and spoil it like that's my job, but I can't do that if I don't make money. I also think I'd go nuts if I never had to work my brain. Like, hell yeah I love a day when I laze around and do nothing, but I can do that for maybe two days straight before I feel like I'm dying. So yeah, UBI isn't going to make people quit working; if it's something that can actually be implemented, I think it'd be dope. I hate how much our society relies on making and spending money. We can't get by without it. But if UBI were implemented, then maybe we wouldn't have to worry so much about it. I could see less crime (particularly stealing) because people don't have to worry about not being able to get their next meal (which of course doesn't cover people who steal because they want nice things rather than in a desperate situation, but I digress). Overall, I think automation is a good thing and I'm all for technological development and advancement. However, like Uncle Ben said, "With great power comes great responsibility", and I hope that we carefully consider the ramifications of an automated world. I’m not gonna lie; I live in a pretty closed bubble when it comes to politics. Before this semester, I didn’t really pay attention to the news. 
When I logged onto social media, the only kind of news I got was about rescued dogs and celebrities. This semester, since I’m taking Ethics and Gender and Pop Culture, it’s been impossible to stay away from the news and the stories of today’s world, and I don’t want to stay away. Hell, I downloaded Twitter just so I can stay up to date with news (Laura laughs at me for this), but someone in my Gender class suggested that it’s a really good place to get news because most of the top stories are fact-checked and confirmed by many reliable sources. I’ve taken to reading theSkimm, so I get a snippet of what’s been happening each day in my inbox. I don’t want to not know what’s happening in the world anymore, especially after I learned about Net Neutrality months too late. Ever since Melina and I were the “peer experts” for the Internet Age in our Gender class, Net Neutrality’s been on my mind. I wondered why something so important was never brought up in our Ethics class, especially since the repeal of Net Neutrality is set to occur on April 24th, but I later saw that we were going to be speaking about it this week. I’m excited to hear some of my classmates’ POVs, particularly those of more conservative leanings. In my Gender class, the students are overwhelmingly (and unsurprisingly, given the topics we discuss) left-leaning, and everyone wanted Net Neutrality. Given what we know, Net Neutrality is essentially the idea that ISPs (and there are only like, 5 major ISPs) cannot limit, block, or speed up content that they agree or disagree with. Basically, it keeps the Internet free through government regulation. Big companies that take up a lot of broadband, like Netflix, Facebook, and Google, as well as start-ups and the general public, are huge advocates for Net Neutrality. On the other side, we have Trump’s new FCC, the ISPs (AT&T, Verizon, Comcast), and basically any person who agrees with all of the president’s opinions.
The cases against Net Neutrality are pretty unconvincing. In the Being Libertarian article, the writer slammed the “400 pages of new regulations” put in place by the FCC for Net Neutrality but didn’t even bother saying what those pages contained. It felt like a ploy to scare people into thinking the Internet is heavily regulated without discussing what any of those pages actually said. At the end of the article I was kind of like, “Ok, so … are they going to talk about their argument against Net Neutrality now?” And I get it, libertarians basically don’t like government regulation, so I guess if the Internet is regulated by the government, it’s bad, right? This is bold, but I’m pretty sure that if these same regulations were somehow put into someone else’s hands (or maybe many hands, to make it seem less like control by a single entity), they would no longer have a problem with them. Or maybe if the FCC were actually chosen by the people, so it represented the people it makes decisions for, there would be less of a problem because it’d feel more democratic. But I digress, since the people on the board are selected by the president and confirmed by our red Senate. I guess those people were elected officials, but why can’t we also elect representatives for the FCC? Anyway, moving on from that, if it isn’t obvious, I am a proponent of Net Neutrality. Sure, it’s probably not perfect, and I haven’t read the 400 pages of regulation set in place, but it sure beats the idea of letting ISPs run wild and block any websites they want from showing their content to users, particularly given what the ISP coverage map looks like, with green indicating access to one ISP and blue indicating access to two or more. Yeah, so what are we supposed to do if we can’t access our favorite sites? Switch to an ISP that likes them? Ha. Well, I guess you better hope that you actually live in an area that lets you do that.
The Internet, though provided by these few ISPs, is a public service, and fair access should be a basic right. We should be allowed to view whatever we want. I want to get both sides of a news story before forming my own opinion. I don’t want to know only the left-leaning side because I’d have to pay in order to learn about the other. I don’t want to be controlled by the ISP I’m forced to have because the competition between them is so abysmal. I think about John Oliver’s Net Neutrality segment, and I worry extensively about living in an age where the Internet could no longer promote movements that were started and propagated through it, like BLM and LGBTQ advocacy. So yeah, I want my Internet to be regulated by the government (“the biggest, most powerful monopoly in the world!!!”) if that’s what it takes to keep the ISPs (read: other monopolies, just privately-owned) from controlling what I can and cannot see.
From what I understand, Corporate Personhood is pretty bogus. It essentially states that a corporation is entitled to certain (but not all) natural rights that humans are typically attributed, such as the ability to sue and be sued, to endorse political candidates (allowing them to funnel millions of dollars into campaigns), and the right to religious freedom. However, you cannot incarcerate a corporation because it’s not really a person. Like Jon Stewart joked, one way to show a corporation isn’t a human is through its “inability to love”. The ramifications of granting corporations these rights make it seem like corporations are entities that can do whatever they want without consequence. This is an oversimplification, but a corporation really can’t be arrested; its CEOs and board members can be, and the company could go bankrupt, but the corporation itself cannot be physically punished, because it is an entity and not a corporeal being.
After reading into the Muslim Registry case study, I believe tech workers and companies are right to pledge not to actively work on building an immigration database. While the pledge is based in moral and ethical views, I don’t see how something like this, which literally profiles targeted individuals and makes them feel unsafe, can sit well with anyone. In the grand scheme of things, it’s hard to justify having a database like this when it probably would not even help in the long run. Like with the ineffectiveness of the NSA and its government surveillance, I don’t see how having a registry of US citizens and immigrants of Islamic descent/religious practice would do anything but stoke the raging fire of animosity and stigma against Muslims that is already burning. Sure, proponents of a registry would say that it makes them feel safer and will justify its existence with this opinion. However, if the tables were turned, they would feel outraged. To have a database based on profiling a person because of their race or religion is a violation of human privacy. To turn that data over to the government and use it to “weed out” the wrongdoers or illegal immigrants feels unfair to those who are just trying to live their lives and mind their own business. I think that companies do need to make business decisions with morality and ethics in mind. To make decisions without them could be severely detrimental to the general public, as well as to the company itself. I believe it’s very important to consider the triple bottom line of “people, planet, and profit” when weighing business decisions and how to move forward. We cannot make decisions that affect the planet and its residents without first thinking about the ramifications they can have. In short, it would be irresponsible to all parties involved. A company also cannot make immoral decisions without hearing back from public outcry and risking a significant flop in how the company is received.
Since companies often cater to their consumers (because consumers are the ones providing the money), they must take into consideration what consumers may think and how they may react before making decisions. Big data is such a hot topic right now. I enjoy learning about it almost as much as I am creeped out by it! There are so many cool things you can learn by looking at and using big data, but there are also a lot of harmful things you can do as well (some of them unintentional, too). I found it really interesting that people who knew their privacy had been invaded (their information was already being manipulated and used in some way) didn't care enough to pay even $1 to keep their information safeguarded, while people who thought their information was protected would be willing to pay. As one person said, people were "resigned" to the fact that their information was out there and being used, and there isn't much to be done about it. Me, personally, I don't really care that they know my googling habits. This somehow feels like a less serious issue than government surveillance, even though there's probably a lot of overlap. In the advertising industry specifically, I don't care that they target ads towards me, mostly because I am not usually swayed by ads (unless they're for cool phone games, because I'm always looking for new phone games). In some ways, it does feel a little invasive, and it's sort of creepy how quickly things will pop up. Just today I had some things sitting in my cart on F21, and I opened a Facebook tab and lo and behold, there were the sandals I was just looking at. This actually reminds me of when this happened to me and Erin last year (hopefully Erin doesn't mind that I screenshotted our text convo). She had sent me a link to the Bowflex trying to explain what it is, and two hours later, while she was on Instagram, she got an ad for it. Two minutes later, I opened Instagram, and I had the same ad!
We laughed about it then and I still think it's pretty funny, but that just goes to show how pervasive this issue really is.
As for a company's responsibilities regarding this information, I do think they really need to keep it private and use it only internally. I find some comfort when I see on the Google Ads page that they do not distribute my personal information to anyone else, but then again, maybe the Terms and Conditions said they had the right to lie to me, so who even knows. However, by using their product, I think I sort of give up some of my right to perfect privacy. I hope that they don't use it for anything bad, but if it's something benign like targeted ads, I don't really mind. I don't really want to re-delve into the issue of government surveillance and our personal information on our phones and computers, so I'll stop there. On another note, I will be the first to admit that I actively use the Adblock Plus Google Chrome extension. I dislike the multitudes of ads, and I like that it blocks ads while I watch shows on Hulu (mind you, I pay for Hulu, so I don't see why I need to further generate revenue for them via their ads). That being said, in a general sense, I do not think that online advertising is typically invasive or intolerable. I deal with the ads I see on Facebook (though I did get annoyed when they added the feature that lets ads play mid-video to force you to watch them right in the middle of the cute puppy video I was watching), and I typically don't mind seeing them, so long as they do not pop up and block the screen I'm looking at. That's when I find them a little too invasive. However, if a site says "PSSST we noticed you're using an AdBlocker" and politely asks me to turn it off, I comply. I don't care if they want me to turn it off so long as I can still enjoy the content I'm looking at, and if giving them revenue by "looking" at ads is going to keep their business up and running, I'll happily contribute, because it has no effect on me. I ignore 90% of the ads I see anyway.
As to whether I think it's ethical to use ad blocking tools, I don't think I have an obligation to "pay" for a site that is free, even if I am using their services. If they wanted to force me to pay, then they shouldn't offer their services up for free. As one of the articles had also mentioned, "It's not unethical to do things because other people don't like them". Though this isn't a universal truth, I think that it applies in this case. When I started to read up on the issue of government surveillance, my mind immediately went to a funny Buzzfeed article that I had seen recently. Its title was "18 Jokes About the FBI Spying on People That Will Make You Laugh Then Feel Super Paranoid". A lot of it was about the FBI Agent that is apparently assigned to watch you through your webcam and about what they must think about you watching you go through the motions of your daily life. It's funny, and I laugh it off because I'm pretty sure it's not true (sounds like a pretty significant waste of resources), but the idea of someone constantly monitoring me and everything that I do on the internet sort of unsettles me. I don't have anything to hide, and I don't ever intend to do anything dangerously illegal, but the idea of being constantly looked at is no bueno.
That all being said, how do I feel about government surveillance? I'm on the fence. On the one hand, I think it's important that the government has the option to access the private data of individuals, particularly those involved in shady dealings or terrorist plots. If invading the privacy of that person means learning about other potential issues that could come up in the future, then I say go for it. However, I understand the ramifications of tech companies purposefully weakening encryption to implement backdoors; even if it were something that could be turned off and on, it still increases the potential for a cybersecurity attack from a party that does not have the user's best interest in mind. As for whether companies like Apple are ethically responsible for protecting the privacy of their users, I do think they are, to an extent. I don't think that the general population of law-abiding citizens should have their privacy invaded by the government. It's tricky, however, as one article mentioned, to define what privacy even means, since it's somewhat of a social/abstract concept. So, to what extent is Apple obligated to expose morally unjust people or those who may be participating in illegal/dangerous activities? I don't think they can simply say "NO" in the name of privacy. I don't think they could really stomach being part of the reason why something horrible happens, especially if it was preventable by sharing "private" information. In hindsight, it's difficult to turn a blind eye to the fact that sharing information could prevent terrible things, even if you are a strong advocate for privacy. How do you weigh the privacy of one malicious individual against the lives of the innocent people who are affected? With that in mind, I suppose I would say I lean more towards allowing government surveillance, but I believe that this surveillance needs surveillance.
The government should not be able to just go around and monitor any person; they should need warrants and probable cause to do so. Even though terrorist attacks are few and far between, I still think having the technology in place to monitor erratic behavior is important and would give people an (ironic) sense of security. I don't know many people who would be against the government invading the privacy of a single individual who knew the exact time and location a bomb would go off in the US. Though it's tough to shy away from the idea of being watched all the time, I believe the potential benefits outweigh the bad. So long as there is very thorough and serious oversight of this government surveillance, I think it would be good in the long run. I find it hard to grapple with the technology industry and its meritocratic ways. I do believe that it is perfectly fine, even expected, for a company to want the people who have the highest skill level at the time. However, at the same time, I do think there is a lot to offer from people who may not necessarily have the skill set yet but are willing and able to learn it over time. I think it's also important that we, as Americans, put Americans first. I do not mean that we should exclude people from other nations. My family, like most, is one of recent immigrants. My mom's siblings are immigrants (but she was born in America), and my grandparents are immigrants from the Philippines. In America, they found opportunity, and they paved the way for me to be where I am today.
That being said, I think it matters, to an extent, where the technology industry gets its employees. We should serve the people that live in our country first before others. I do not, however, believe that we should get rid of the H-1B program. That may seem contradictory, but I think we need to scope out the best of the best from our own nation before we seek those from outside the US. As to whether companies should be free to hire the best regardless of national origin, I struggle with this. Are companies doing this because they want the best of the best, or are they doing it because they want the best for a cheaper price? How would companies react if they knew they needed to pay higher salaries to foreign workers if they went this route? I believe if they were okay with that, then companies should be allowed to freely choose whom they want. However, if they want workers from anywhere regardless of national origin, I think money cannot be the deciding factor in why they want this to be the case. I don't know if it should be called a moral/ethical obligation to America, but I do think it should be a given that companies founded and able to grow because they are located in the United States should have some sort of loyalty or patriotic love for their country. If it's possible to, as Trump said, "Buy American, Hire American", I do not think companies should really be opposed to that, especially if you level the economic playing field with equal pay for all workers at their respective levels regardless of national origin. It's hard to answer whether nations should prioritize their corporations' needs over the needs of their citizens. I think there must be a balance, because if you focus too much on one group, the other will be inclined toward restlessness and general discontent.
There needs to be a sort of compromise that allows nations to care about both semi-equally, and I think that reforming and revising how H-1B visas are given out would help alleviate at least part of this struggle. However, as a disclaimer: I don't know a whole lot about the topic because I tend to stay away from politically polarizing issues, so I can't say that my opinion is really well-informed or well-formed. I was surprised to read about the Therac-25 accidents, not because I don't believe that a software mistake could kill someone, but because I had never heard about it. It makes sense to learn about it in an ethics class, but I feel like a case study like this should be introduced to CS students early on in their studies, just so it's in the back of their minds. They should be aware that what we do and what we create has the potential to have significant ramifications for those that use it. It doesn't have to be a big, in-depth thing, but it should at least be mentioned, perhaps, when we start learning to code and start to encounter bugs.
That all being said, the Therac-25 was a radiation therapy machine introduced by Atomic Energy of Canada Limited (AECL) in the 1980s, following similar units bearing the same name (but a different number). The difference between the Therac-25 and its predecessors, however, was that the Therac-25 implemented its safety measures in software rather than hardware. Because the earlier models' hardware interlocks masked the problem, AECL overlooked a bug that had existed in the original software all along. This would prove to be a deadly mistake. The bug allowed the machine to be set up incorrectly if the radiology technician made a typing mistake at a critical point while the machine was aligning its magnets. If the magnets were not properly placed, a "Malfunction 54" error appeared, which indicated that the machine did not know whether there was going to be an underdose OR an overdose of radiation. This message apparently occurred relatively frequently and did not appear in the manual, so technicians tended to ignore the error because it seemed trivial. However, when the magnets were not set and the patient received radiation anyway, the doses were hundreds of times greater than normal, resulting in the deaths of at least 3 people and serious injuries to 3 more. One challenge that software developers working on safety-critical systems face is checking their code very thoroughly, testing every possible case, and making sure everything works as expected. When code has the potential to kill or seriously injure someone, lose or severely damage equipment/property, or cause environmental harm, it is crucial that developers take great care to make sure these systems do not fail. I think they should approach these systems very carefully, knowing the magnitude of impact a mistake or bug can have. When accidents happen, I do not think that it is only the software engineers at fault.
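The heart of the failure, as I understand it, was timing: the operator's data entry and the slow magnet setup ran concurrently, so a quick edit could leave the screen and the hardware silently disagreeing. Here's a minimal, hypothetical Python sketch of that kind of stale-snapshot race; the names and tick counts are my own inventions, not AECL's actual code:

```python
SETUP_TICKS = 3  # pretend the magnets take several "ticks" to align (assumption)

class TherapyMachine:
    def __init__(self):
        self.prescribed_mode = "electron"  # what the operator's screen shows
        self.configured_mode = None        # what the hardware is actually set for
        self.setup_ticks_left = 0

    def begin_setup(self):
        # snapshot the prescription and start moving the magnets
        self.configured_mode = self.prescribed_mode
        self.setup_ticks_left = SETUP_TICKS

    def operator_edit(self, new_mode):
        # BUG (the illustrated flaw): editing during setup updates the display
        # but does NOT restart magnet alignment, so the hardware keeps the
        # old snapshot while the screen shows the new prescription
        self.prescribed_mode = new_mode

    def tick(self):
        # one unit of time passing while the magnets move
        if self.setup_ticks_left > 0:
            self.setup_ticks_left -= 1

    def fire(self):
        assert self.setup_ticks_left == 0, "setup still in progress"
        # the beam is delivered with whatever the hardware was configured for
        return self.configured_mode

machine = TherapyMachine()
machine.begin_setup()           # operator confirms "electron" mode
machine.tick()
machine.operator_edit("xray")   # quick edit while the magnets are still moving
machine.tick(); machine.tick()  # setup finishes using the STALE snapshot
delivered = machine.fire()
print(f"screen shows {machine.prescribed_mode!r}, beam delivered as {delivered!r}")
# screen shows 'xray', beam delivered as 'electron' -- a silent mismatch
```

The unsettling part the sketch tries to capture is that nothing crashes and no error is raised: the mismatch is completely silent, which is exactly why this class of bug is so dangerous when the software is the only safety interlock.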
There are many different people working on a single project; it will almost never be just the software engineers who originally designed the code. One must also lay blame on the people who test and review the code, because they are the ones who need to catch these mistakes. There should be a very structured way of combing through code to prevent failures caused by bugs or unexpected input. However, because we are only human, it is difficult to catch every single thing. People who are going to use products with safety-critical systems should be aware of the potential for problems in the software, so they can anticipate issues and do their own research. This way, they can make an educated decision about whether or not they want to go through with it. That all being said, it's hard to prevent accidents (because by their very nature, they are not foreseen or intentional), but when they stem from something we can control, we must take every action to prevent them from happening.

I don't believe the gender gap is overblown, but I don't think it's necessarily a problem. I believe that part of the reason women are underrepresented is not necessarily that they aren't being hired, but that they aren't applying or aren't interested in the field. Even looking at our own CS class, there are far more males than females, and this is not out of the norm for the national average. In particular, one graph from the reading really caught my attention. I remember how hard my high school pushed the STEM fields on us. I went to an all-girls school, and they really stressed women's empowerment and stretching the boundaries of what a woman "could" and "could not" do. I think that graph reflects which of the STEM fields schools tend to push, though. I had never even thought about coding until I came to college and used MATLAB for EG 101010101001 or whatever.
It wasn't something that was really offered at our school. There was an option to take AP Computer Science online, but taking a class online and in person are two very different things. At our actual school, we had AP Calc, Chemistry, Physics, even Financial Algebra, but no Computer Science. It just wasn't really something that they pushed.
I can also honestly say that I don't think I would have ended up in CS if Notre Dame didn't have CS in the College of Engineering. It wasn't something that crossed my mind, and it wouldn't have been introduced to me through that first EG class if it weren't included. I'm not sure exactly how our percentage of women in CS compares to the average, but I wouldn't be that surprised if we were slightly higher than the norm because of this. That all being said, this gender gap in tech definitely stems (no pun intended) from a lack of interest in, or introduction to, CS. Of course, there are other factors too. I didn't know about it before, but sexism is strong in the industry, which is pretty disappointing. I do, however, remember watching CODE: Debugging the Gender Gap my freshman year and being surprised to hear women's testimonies about their experiences in tech. (Don't ask me to elaborate on that more, though, because I don't remember much beyond that feeling.) I think there is a recent focus on diversity because of the politics surrounding us today. Race and gender are at the forefront of a lot of people's minds, especially because of the man we elected as president. He inspired outrage, with countless Women's Marches across the country, and he made and is continuing to make minorities (particularly African Americans and Hispanics) feel unsafe in the country they call home. Because of this, I think there has been a hyperawareness of many issues regarding race and gender inequality, so the tech industry is being called out for being poorly diversified. It's interesting being an Asian woman in the tech industry. There appear to be far more Asians in tech compared to other minorities, but I'll bet that the gender diversity within that subset of people is pretty poor too.
Author: Julianna Yee
Archives: March 2018