So, before you get the wrong impression, I’m 40. Last year I enrolled in a master’s program in IT to further my career. It’s a special online master’s offered by a university near me and geared towards people in full-time employment. Almost everybody is in their 30s or 40s. You actually need to show your employment contract as proof when you apply to the university.

Last semester I took a project management course. We had to find a partner and simulate a project: basically write a project plan for an IT project, think about what problems could arise and plan how to solve them, describe what roles we’d need for the team, etc. In other words, do all the paperwork of a project without actually doing the project itself. My partner wrote EVERYTHING with ChatGPT. I kept having the same discussion with him over and over: write the damn thing yourself. Don’t trust ChatGPT. In the end, we’ll need citations anyway, so it’s faster to write it yourself and insert the citations than to retroactively figure them out for a chapter ChatGPT wrote. He didn’t listen to me and had barely any citations in his part. I wrote my part myself. I got a good grade; he said he got one, too.

This semester turned out to be even more frustrating. I’m taking a database course, SQL and such. There is again a group project: we get access to a database of a fictional company and have to perform certain operations on it. We decided as a group that each member would prepare the code on their own before we get together, compare our homework, and decide what code to use on the actual database. So far, whenever I checked the other group members’ code, it was way better than mine. It incorporated a lot of things the course script hadn’t covered at that point. I felt pretty stupid because they were obviously way ahead of me - until we had a video call. One of the other girls shared her screen and was working in our database. Something didn’t work. What did she do? Open a ChatGPT tab and let the “AI” fix the code. She had also written a short Python script to help fix some errors in the data, and yes, of course that turned out to be written by ChatGPT.

It’s so frustrating. For me it’s cheating, but a lot of professors see using ChatGPT as using the latest tools at our disposal. I would love to honestly learn how to do these things myself, but the majority of my classmates seem to see that differently.

  • atrielienz@lemmy.world · 1 day ago

    Here’s a question. I’m gonna preface it with some details. One of the things I used to do for the US Navy was develop security briefs. Writing a brief is essentially pulling information from several sources (some of which might be classified in some way) to provide detail for the purpose of briefing a person or people about mission parameters.

    Collating that data is important and it’s got to be not only correct but also up to date and ready in a timely manner. I’m sure ChatGPT or similar could do that to a degree (minus the bit about it being completely correct).

    There are people sitting in degree programs as we speak who are using ChatGPT or another LLM to take shortcuts not just in learning but in doing course work. Those people are in degree programs for counterintelligence and similar fields. Those people may inadvertently put classified information into these models. I would bet it has already happened.

    The same can be said for trade secrets. There are lots of companies out there building code bases that are trade secrets themselves or that deal with trade-secret-protected information.

    Are you suggesting that they use such tools as part of their arsenal to make their output faster? What happens when they do, and the results are collected by whatever model they use and fed back into the training data?

    Do you admit that there are dangers here that people may not be aware of, or may not even realize they could one day work in a field where this becomes problematic? I wonder about this all the time, because people only seem to think about the here and now, how quickly something can be done, and not about the consequences of doing it quickly or more “efficiently” using an LLM. I wonder why people don’t think about it the other way around.

    • 0x01@lemmy.ml · 24 hours ago

      I am not an expert in your field, so you’ll know better than I do about the domain-specific ramifications of using LLMs for the tasks you’re asking about.

      That said, one of the pieces of my post that I do think is relevant and important for both your domain and others is the idempotency and privacy of local models.

      Idempotent here means that the model is not liquid (its weights don’t change from one input to the next) and that its randomness can be kept under control, for example by decoding deterministically.

      Local models are, by their very nature, not sending your data anywhere; they are running your input through your GPU, similar to many other programs on your computer. That needs one qualification: any non-airgapped computer’s information is likely to be leaked at some point in its lifetime, so adding classified information to any system is foolish and short-sighted.
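
      To make that concrete, here’s a minimal sketch of fully local, deterministic inference. It assumes the Hugging Face transformers library and a model already downloaded to disk; the path is just a placeholder, not a recommendation of any particular model.

      ```python
      # Minimal sketch: local, deterministic inference.
      # Assumes transformers is installed and the weights are already on disk;
      # "path/to/local-model" is a hypothetical placeholder.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_path = "path/to/local-model"  # local directory of weights, no network needed
      tokenizer = AutoTokenizer.from_pretrained(model_path)
      model = AutoModelForCausalLM.from_pretrained(model_path)

      prompt = "Summarize these notes: ..."
      inputs = tokenizer(prompt, return_tensors="pt")

      # do_sample=False means greedy decoding: the same prompt always yields
      # the same output, and the weights never change between calls.
      output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=False)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```

      Nothing in that flow touches the network once the weights are on disk, which is the privacy point, though the caveat above about the machine itself still applies.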

      If you use ChatGPT to collate private, especially classified, information: OpenAI has explicitly stated that it uses ChatGPT prompts for further training. So yes, that information will absolutely leak into future models, and you should also expect it to leak in a way that is traceable to you personally.

      To summarize: using local LLMs is somewhat safer for tasks like the ones you’re asking about. The information won’t be shared with any AI company, but that does not guarantee safety from traditional snooping. Using remote commercial LLMs, though? Your fears are absolutely justified: anyone inputting classified information into commercial systems like ChatGPT will both leak that information and taint future models with it. That taint isn’t even limited to the one company/model; distillation means derivative models will also carry that privileged information.

      TL;DR: yes, but less so for local AI models.