* better osrelease parsing (ignore trailing garbage at the end of the string; see the parsing sketch after this list)
* add defaultshelltype to telemetry input
* track reinit errors by shelltype to see if zsh integration is working
* wrote client code for communicating with lambda cloud
* added timeout functionality, added a check that telemetry is enabled for cloud completion, added the ability to unset the token, other small fixes
* removed stale prints and comments, re-added non-streaming completion for now
* changed json encode to json marshal (see the marshal sketch after this list), also testing my new commit author
* added a 'no telemetry' error message and removed the model check in cloud completion
* added defer conn.Close() to doOpenAIStreamCompletion so the websocket is always closed (see the streaming sketch after this list)
* made a constant for the long telemetry error message
* added an endpoint getter, improved error messages
* updated scripthaus file to include dev ws endpoint
* added an error check for OpenAI errors
* changed bool condition for better readability
* updated some error messages (use the error message from the server if one is returned)
* don't blow up the whole response if the server times out; just write a timeout message
* render streaming errors with a new prompt in openai.tsx (show both content and the error). render cmd status 'error' with a red 'x' as well. show the exit code in the tooltip of the 'x'
* set hadError when an error occurs. update the timeout error to work with the new frontend code
* bump client timeout to 5 minutes (longer than server timeout)
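
The osrelease item refers to tolerating trailing garbage in the release string. A minimal sketch of that idea, assuming osrelease is a kernel-release style string such as "5.15.0-91-generic"; the helper name parseOSRelease and the exact rule of keeping only leading digits and dots are illustrative, not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// parseOSRelease keeps only the leading "major.minor.patch" digits of a
// kernel release string and ignores anything after them, so vendor
// suffixes or stray bytes at the end of the string are dropped.
func parseOSRelease(release string) string {
	release = strings.TrimSpace(release)
	end := len(release)
	for i, ch := range release {
		if (ch < '0' || ch > '9') && ch != '.' {
			end = i
			break
		}
	}
	return strings.TrimRight(release[:end], ".")
}

func main() {
	fmt.Println(parseOSRelease("5.15.0-91-generic"))    // 5.15.0
	fmt.Println(parseOSRelease("6.5.7 some garbage\n")) // 6.5.7
}
```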
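The json encode -> json marshal change: with json.Marshal the payload bytes are produced (and any encoding error caught) before anything is written, instead of encoding directly onto the writer. A hedged sketch; the request type and field names are stand-ins, not the real packet definitions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudCompletionRequest is a stand-in type; the real packet fields differ.
type cloudCompletionRequest struct {
	Prompt    string `json:"prompt"`
	MaxTokens int    `json:"maxtokens,omitempty"`
}

func marshalRequest(req cloudCompletionRequest) ([]byte, error) {
	// json.Marshal builds the full byte slice up front, so a marshaling
	// error can be returned before any bytes go over the websocket
	// (json.NewEncoder(w).Encode(req) would write while encoding).
	data, err := json.Marshal(req)
	if err != nil {
		return nil, fmt.Errorf("cannot marshal completion request: %w", err)
	}
	return data, nil
}

func main() {
	data, err := marshalRequest(cloudCompletionRequest{Prompt: "hello"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```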
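Several items above (defer conn.Close(), the timeout message instead of failing the whole response, and the 5-minute client timeout) describe the same streaming flow. A rough sketch assuming gorilla/websocket; the signature of doOpenAIStreamCompletion, the packet type, and the "done"/"error" conventions are assumptions for illustration, not the actual Wave code:

```go
package cloudclient

import (
	"errors"
	"fmt"
	"net"
	"time"

	"github.com/gorilla/websocket"
)

// client timeout, deliberately longer than the server-side timeout
const cloudCompletionTimeout = 5 * time.Minute

// openAIPacket is a stand-in for the streamed packet type.
type openAIPacket struct {
	Type  string `json:"type"`
	Text  string `json:"text,omitempty"`
	Error string `json:"error,omitempty"`
}

func doOpenAIStreamCompletion(endpoint string, req openAIPacket, out chan<- openAIPacket) error {
	defer close(out)

	conn, _, err := websocket.DefaultDialer.Dial(endpoint, nil)
	if err != nil {
		return fmt.Errorf("cannot connect to cloud completion endpoint: %w", err)
	}
	// always close the websocket, no matter how this function returns
	defer conn.Close()

	conn.SetReadDeadline(time.Now().Add(cloudCompletionTimeout))
	if err := conn.WriteJSON(req); err != nil {
		return fmt.Errorf("cannot send completion request: %w", err)
	}
	for {
		var pk openAIPacket
		if err := conn.ReadJSON(&pk); err != nil {
			var nerr net.Error
			if errors.As(err, &nerr) && nerr.Timeout() {
				// don't blow up the whole response on a timeout:
				// emit a timeout message and keep what already streamed
				out <- openAIPacket{Type: "error", Error: "timeout waiting for server response"}
				return nil
			}
			return fmt.Errorf("error reading completion response: %w", err)
		}
		if pk.Type == "done" {
			return nil
		}
		out <- pk
	}
}
```
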
---------
Co-authored-by: sawka