(Part 1 of X; honestly, I have no idea how many parts this series will run to, anyway.)

If you're reading this you probably know what Thrustmaster is, what its main business is and what its best-known products are. You're probably a simmer.

If you're not a simmer, that is, a person interested in simulation (usually flight simulation), then maybe you don't know Thrustmaster. This is their website: Thrustmaster.com (US).

Some Thrustmaster products really made history and any good simmer knows them:

  • Pro Flight Control Stick
  • X-Fighter Joystick
  • Rudder Control System
  • Weapons Control System – a programmable throttle controller
  • F-16 TQS and FLCS – full size programmable replicas of the F-16C’s throttle and stick
  • F-22 PRO – a full size programmable replica of the YF-22 stick (almost exactly the same as an F-16C's; the F-22A's stick is different)
  • HOTAS Cougar – an updated replica of the F-16's HOTAS controller
  • HOTAS Warthog – a replica of the A-10C’s HOTAS

Back on topic, around 1998/1999 my father bought me a Thrustmaster Top Gun Platinum joystick. The original "Top Gun" (the joystick, not the movie) was an X-Fighter joystick with simpler gimbals and directly attached potentiometers (from now on, "pots" for short). The Top Gun Platinum added a throttle to the base of the controller and an all-black colour scheme.

Thrustmaster Top Gun Platinum

The joystick we’re talking about

Top Gun and Paramount logos on the joystick

The joystick model with a big logo of Paramount

And now some (interesting?) technical details:

  • The stick is really similar to (but not exactly a replica of) a B-8 grip. The B-8 was a very widespread grip used on many US and NATO aircraft, like the F-4 "Phantom II", the A-10A "Thunderbolt II", the Bell 206 "JetRanger III", the Aermacchi MB-339 and many others.
  • The stick, as was usual at the time, connected via game port.
  • It has three axes, four buttons and a four-way "china hat" switch, usually used in simulators to change the player's point of view (and therefore sometimes named HAT/POV), while in real aircraft it's used for pitch and roll trim.
  • Thrustmaster used a "hack" to connect the hat switch on this and many other joysticks before USB became the standard connection. Game ports allowed a total of four axes and four buttons to be connected, and since a hat switch is usually implemented with four microswitches, there weren't enough button lines left. Thrustmaster therefore used an axis line and a few resistors to send distinct resistance values to the game port. An ad-hoc driver inside the game or the operating system decoded those resistance values and treated them as four different button presses. The drawback is that only the "up", "down", "left" or "right" directions were possible (both mechanically and electrically), as it wasn't possible to combine two directions simultaneously.
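As a rough sketch of how such a driver could decode the hat, here's a small Python function mapping a raw axis reading to a direction. The threshold values are purely illustrative, not the ones Thrustmaster actually used:

```python
# Hypothetical sketch of the "resistance hack" decoder: each hat
# direction puts a different resistor on one axis line, so the game
# port reads a distinct value. Thresholds below are made up.

HAT_THRESHOLDS = [
    (900, "centre"),  # no resistor: reading near full scale
    (700, "up"),
    (500, "right"),
    (300, "down"),
    (100, "left"),
]

def decode_hat(axis_reading):
    """Map a raw axis reading (0-1023) to a hat direction."""
    for threshold, direction in HAT_THRESHOLDS:
        if axis_reading >= threshold:
            return direction
    return "left"  # lowest band

# Note that only one direction can be reported at a time, which is
# exactly the limitation described above: no diagonal hat positions.
```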

The project itself is about completely rewiring the joystick and converting it to USB, using a cheap Arduino reprogrammed as a HID device.

Part 2 of X will follow when work actually starts.

Bye



I'm doing some experiments with OpenWrt (http://openwrt.org); in particular, I need to build a custom firmware image for a cheap router (a TP-Link WR-841N).
OpenWrt is modular enough to let you install packages on an already flashed image, but when your flash memory is 4 MiB, you want to strip everything unnecessary and bake everything you need into the SquashFS file system.

Building a custom image doesn't require recompiling anything: there is an ImageBuilder package that just creates the complete firmware image with a custom build script.

The ImageBuilder package has been designed to run on an x86_64 Linux distro.

So I installed CentOS 7.0 on Hyper-V on Windows 8.1. Everything was working out great except for the screen resolution, which was stuck at 1152×864 (X.org is smart) in Gnome's Display Settings, while my notebook display is 1366×768.
I would be pretty satisfied running Linux at 1024×768; it's not that I really need 1366×768 at the moment, but even though 1024×768 is a lower resolution than 1152×864, X.org doesn't allow selecting any of the lower VGA, SVGA or XGA resolutions.

It's not that the VM is unusable, but it's very frustrating dealing with scrollbars even in full-screen mode. The funny part is that Linux has included the Hyper-V integration services since kernel 3.3 or 3.4 or so, and RHEL 7.0 currently uses 3.10 (a giant leap forward from the 2.6.32 kernel of RHEL 6.x), but there was no way to make X.org recognize the Hyper-V framebuffer.

With the xorg.conf file gone a long time ago, we are in the era of autoconfig, monitor hotplugging, etc.

Microsoft states that the best way of connecting to a VM running in Hyper-V is via RDP. This requires a stable network connection between the host and the guest OSs and an RDP service running in the guest OS: pretty easy on Windows, a bit more complicated on Linux, where xrdp, an RDP server, works but isn't an easy solution and still requires a stable network connection.
If the Hyper-V server is in the datacenter, this surely is the best solution, but on a notebook it's a bit overkill.

Looking at the output of lsmod, the hyperv_fb module is already loaded, so there is no reason for it not to work.

After following various guides with all sorts of commands, like adding a modeline with xrandr (doesn't work), adding video=1366x768:24 to the kernel boot arguments (doesn't work), or adding resolution=1366x768, again to the kernel boot arguments (needless to say...), I finally found the first half of the solution in a forum about SUSE.

TL;DR

Adding the kernel boot argument in GRUB2:

video=hyperv_fb:1366x768

finally allowed me to use the VM in full screen @ 1366×768!
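For reference, on a stock CentOS 7 install this boils down to appending the argument to GRUB_CMDLINE_LINUX in /etc/default/grub (paths and defaults may differ on other distros):

```
# /etc/default/grub (excerpt) - append the argument to the kernel command line
GRUB_CMDLINE_LINUX="rhgb quiet video=hyperv_fb:1366x768"
```

followed by regenerating the GRUB2 configuration with grub2-mkconfig -o /boot/grub2/grub.cfg and rebooting.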

Bye


Fahrenheit 896

posted by Viking
Apr 13

Paper burns at 451° F (~233° C). Ray Bradbury decided to title one of his novels after this temperature.

Solder melts at 370° F (~188° C). No one titled a novel after this temperature, and the reason is pretty obvious: it isn't always true.
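For the curious, the conversions are easy to check with plain Fahrenheit-to-Celsius arithmetic:

```python
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# The temperatures mentioned in this post:
print(round(f_to_c(451)))  # 233 - paper
print(round(f_to_c(370)))  # 188 - 60/40 tin-lead solder
print(round(f_to_c(896)))  # 480 - still not enough for some RoHS joints
```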

The most common solder was, when I was small and Xmas trees were tall (Bee Gees anyone?), the alloy made of tin and lead.
More precisely, the alloy made of 60% tin and 40% lead.
It was cheap, it was good, it was easy to use for average electronics work, that is, building a circuit from scratch or repairing a factory-made device (they used more or less the same alloy).

Now we are tall, and Xmas trees are small (“First of May”, by The Bee Gees) and lead is nowhere to be seen anymore. Not in gasoline, nor in solder alloys used in factory-made devices.
The problem is that when a component (maybe an SMD) is soldered to the ground plane of a circuit board with RoHS-compliant solder, not even 896° F (~480° C) is able to melt the d**n thing!

I will make d**n sure not to buy any RoHS-compliant solder for the next few decades.

Bye



A quick update just to say that the adapter is fully working on the Raspberry Pi running NetBSD 6.99.
Connection parameters are 115200-8-N-1 with flow control OFF (ON by default on PuTTY).
The adapter should work also on Rev. 1 Raspberry Pi B models, but there is no P6 (soft reset) header on that revision.

As a standalone serial interface, it works flawlessly with my old D-Link DSL-G624T wireless modem router. Being a rather old device, it uses a slower 38400 bps connection (38400-8-N-1), but nevertheless it works pretty well.

Finally, just to leave no doubt about the SP3232 IC, as mentioned in this article (http://www.fullmeta.it/?p=379):

Yes, they're exactly those.

Bye



2012 was the year of the Raspberry Pi. This credit card sized computer has become a huge worldwide success.
Running GNU/Linux or other operating systems is an easy task: just flash an image to an SD card, put it in the Raspberry Pi and switch on the power supply.

The Raspberry Pi version B sports two USB 2.0 ports (only one on vers. A), a Fast-Ethernet connection (no network on vers. A), HDMI, Composite Video and stereo audio output.
It seems there's nothing missing on the connection side. You can just plug in a TV/monitor and a keyboard (and a mouse) and you're ready to use the system.
You can also access it via SSH if you're using Raspbian or another OS that automatically enables the network connection and runs sshd or some telnet server at startup.
But, if you don’t have an available TV/monitor and you can’t connect to the Raspberry via network (because there is no DHCP server on your current network or there are no SSH/telnet servers running on the OS), your last chance is a serial console.

I’ll leave the basics to this simple and short article by Joonas Pihlajamaa: http://codeandlife.com/2012/07/01/raspberry-pi-serial-console-with-max3232cpe/
In a nutshell, the Raspberry Pi does have a serial port, and the OS usually enables a serial console on it by default, but there isn't a standard UART/RS-232 connector. Two pins of the GPIO header must be connected to a level shifter like the Maxim MAX3232 in order to get a fully working RS-232 connection.

While the solution by Joonas Pihlajamaa works pretty well, I decided to make some changes:

  • I wanted an interface circuit with a standard DB-9 male connector. This way I can just change some settings, disable the serial console and use the circuit as a simple serial port for the Raspberry Pi.
  • I wanted something like an Arduino shield, to just plug on top and be 'solid' with the Raspberry Pi.
  • I wanted the other GPIO pins to be available for other connections, like displays, I2C devices, RTC modules, etc.
  • I wanted the two pins of the soft-reset headers to be available for use even with the circuit plugged over.
  • I also wanted to use the circuit as a standalone RS-232/3.3V level converter to be able to connect to other embedded systems’ serial consoles (like the serial console found in many routers).

In the end, I came up with this solution, made with a MAX3232-compatible IC (the cheaper and more versatile SP3232ECP), some stackable headers, the usual five 100 nF capacitors and a DB-9 male connector salvaged from an old motherboard.

    Serial Port Circuit mounted over a Raspberry Pi B rel. 2

    The P6 header “repeater” (as I call it) also serves to support the circuit on the side of the DB-9 connector.
    A four pin AUX header is also provided for standalone use, with 3.3V, GND, RX and TX connected.
    24 out of 26 GPIO pins are present on the circuit. Of course GPIO pins 8 and 10, TX and RX, are not available for other connections.

    Serial circuit P6 header detail
    Serial circuit AUX header detail

    I'm currently trying the adapter on the Raspberry Pi and it seems to be working well. On the PC side I'm using an old Prolific USB-to-serial adapter with a null-modem cable.

    Bye



    For various reasons, I need to use OpenVPN at the university to be able to connect to the internet when I’m connected to a wired connection.
    I don't like OpenVPN on Windows, primarily because it's software created for *nix systems: it doesn't run very well under Windows, it needs a lot of configuration under certain circumstances, and so on.
    Nevertheless, OpenVPN works by creating an IPv4 point-to-point connection using a /30 subnet between the server and the client. So, for instance, if the server's address on the point-to-point link is 192.168.2.1, the client will have 192.168.2.2, the subnet address will be 192.168.2.0 and the broadcast address will be 192.168.2.3.
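    The /30 layout is easy to verify with, for example, Python's standard ipaddress module (just a quick sanity check, not part of any OpenVPN setup):

```python
# Quick check of the /30 point-to-point subnet described above,
# using Python's standard ipaddress module.
import ipaddress

net = ipaddress.ip_network("192.168.2.0/30")
hosts = list(net.hosts())  # the two usable addresses

print(net.network_address)    # 192.168.2.0 - the subnet itself
print(hosts[0])               # 192.168.2.1 - server side
print(hosts[1])               # 192.168.2.2 - client side
print(net.broadcast_address)  # 192.168.2.3 - broadcast
```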

    If you're using Oracle VirtualBox or VMware Player, you can simply configure the virtual machine's network adapter to manage a NAT itself. If the host has internet access, guest operating systems will be able to connect through a NAT hidden from (but usually customizable by) the user.

    But what if you're using Hyper-V? Hyper-V has been designed for datacenter operations on Windows Server, where dedicated physical routers manage routing, NAT, etc.
    This brings a lot of really cool features, like directly connecting a virtual machine to an FCoE SAN or managing virtual switches and other stuff, as expected from an enterprise-class hypervisor.

    Supposing that, like me, you're running Windows 8 / 8.1 with Hyper-V on a laptop (I need it for the Windows Phone 8 emulator) and you're connecting through some kind of PtP connection, like OpenVPN or a simple PPPoE modem, you need to configure a NAT on your system yourself.
    This despite the fact that you won't always need it, that it won't work for every wireless or wired connection you're going to use, and that there is a really big problem ahead; but let's talk about that later.

    Creating a NAT for your virtual machines is pretty easy.
    Just open the Hyper-V management console and create a new virtual switch connected to an internal network (call it "Hyper-V NAT" or something like that). Then open the Control Panel, go to the Network and Sharing Center, enable Internet Connection Sharing for the PtP connection you're using and select the "Hyper-V NAT" adapter as the "home network" adapter.

    By doing this, Windows will enable packet forwarding, will set the IP address of the “Hyper-V NAT” adapter to 192.168.137.1/24 and will enable a DHCP & DNS service on the same adapter.
    Virtual Machines connecting via the “Hyper-V NAT” adapter will automatically get their network configuration and will be able to surf the web (and usually download several hundred MBs of updates on their first run).

    Seems easy, huh? Well, it is. You can also change the switch a VM is connected to while it's running, so if you're moving to a place where your PtP connection isn't needed, you can simply connect the VM to another virtual switch.

    That’s fine, really fine, until someday you need to share the 3G/4G connection of your Windows Phone 8 with your laptop.
    How does it work? Easy. Your WP8 device turns into a wireless router with a built-in DHCP & DNS service.
    The Wi-Fi adapter IPv4 address of your WP8 device is set to 192.168.137.1/24 and your laptop will get the network configuration automatically by your phone.
    Right?

    NO.

    Your wireless adapter is set as the following:
    IPv4 address: 192.168.137.2 (or .3, or .42, etc., automatically assigned by your Windows Phone's DHCP)
    Subnet Mask: 255.255.255.0 (or /24, by DHCP)
    Default Gateway: 192.168.137.1 (by DHCP)

    but your “Hyper-V NAT” adapter is set as the following:
    IPv4 address: 192.168.137.1 (automatically set by Windows Internet Connection Sharing service)
    Subnet mask: 255.255.255.0 (or /24, always assigned by Windows ICS service)
    Gateway: none (or 127.0.0.1, but it doesn’t matter).

    That's not gonna work. What your WP8 device doesn't know is that the gateway address it hands out, 192.168.137.1, is already assigned to your "Hyper-V NAT" adapter, so it's effectively telling your laptop to use itself as the gateway.

    The easy workaround is to disable the “Hyper-V NAT” adapter when you’re tethering your connection to your laptop, and that works.

    Or you can actually solve the problem, by telling Windows ICS to use a different subnet to share the connection.
    Because 192.168.137.0/24 is not exactly an "exotic" subnet, I decided to use 172.31.137.0/24 (yes, /24; not that you can select a different netmask anyway).
    To change these values, you need to manually edit the Registry values located under Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters.
    Change ScopeAddress, ScopeAddressBackup and StandaloneDhcpAddress according to your needs.
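    As a sketch, the change can be captured in a .reg file; the values below assume the 172.31.137.0/24 choice above (adjust to taste, and back up the key before touching it):

```
Windows Registry Editor Version 5.00

; Hypothetical example: switch Windows ICS to the 172.31.137.0/24 subnet.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters]
"ScopeAddress"="172.31.137.1"
"ScopeAddressBackup"="172.31.137.1"
"StandaloneDhcpAddress"="172.31.137.1"
```

    Restart the ICS service (or reboot) afterwards so the new scope takes effect.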

    Try to pick a subnet you're almost sure you'll never use and you should be fine until IPv4 is deprecated (HAH!).

    Have fun!

    Bye


    Jan 27

    Exactly one month ago I received my Acer Iconia W510, thanks to a partnership between Acer and Microsoft, both of which I want to thank one more time.
    The Iconia W510 features a brand new Intel Atom Z2760 “Clover Trail” SoC with 2 GiB RAM and a 32 GB SSD.
    With a 1366×768 10″ multitouch display and a detachable keyboard it’s one of the first platforms where Windows 8 can show its full potential.
    Following a rather new tradition, the Iconia has been named Harrier and has joined my main pool of computers, composed of Hornet (my laptop) and Raptor (my workstation).

    I started working on x86 systems in 1994 and didn't have any occasion to work on other platforms until 2008, when I got my first (used) UltraSPARC v9 workstation. I was still a Windows user, though, and as such I have always had x86 (and x64) systems to run the various versions of Windows I've used during the last 19 years.

    As a result, I was very interested in the new Windows RT operating system for ARM SoCs.
    I had the opportunity to try it and, even with the limitation of not being able to install any desktop application, there is still a desktop, there are still both the command prompt and PowerShell (which can run with administrative privileges), and there are the usual command-line utilities like netsh and a lot of other things that make Windows RT a "complete" operating system.
    Not to mention that Windows RT comes with Office H&S 2013.

    Windows on x86 hardware is nonetheless another story, especially if you are a power user like me.
    For instance, this is my home workspace. The W510 fits nicely to the left of Raptor's main screen.

    My desk with two PCs and the Iconia W510

    Being able to run the full range of 32-bit Windows applications in existence is priceless. There are scenarios where the need to install software like PuTTY or OpenVPN, for instance in UNIX or *nix-based workplaces, exceeds the capabilities of any Windows RT device.
    I installed Visual Studio on my Iconia last week and now I'm able to do much of the work I already do on my laptop or my workstation. Of course I can't run the WP8 emulator, but I can still write down some ideas as code anywhere I am (with the help of Visual Studio's IntelliSense).

    One thing that was really unexpected is the battery life. It's amazing. I can use it for two whole days without needing to charge the two batteries (one in the unit, one in the detachable keyboard).
    I was really surprised, considering that my dad's Intel Atom based netbook, running Windows 7, could last at most 6 to 7 hours, maybe 8 with an aggressive energy-saving policy.
    The idea of putting another battery pack in the keyboard was excellent. When using the Iconia with the keyboard, or while using the keyboard as a stand, the internal battery is depleted last, only after there's no more charge in the keyboard's battery.

    The screen is large enough to be used for productivity tasks while, having a 16:9 aspect ratio, it's a little less suited for reading fixed A4 documents. On the other hand, it's comfortable enough for reading e-books or other content with a variable layout, better suited to portrait orientation on a 16:9 screen.
    The minimum screen brightness is low enough not to strain your eyes while reading. BTW, if reading at night without any other light source, it's better to switch to a white-on-black, or even a grey-on-black, colour scheme if the app / website allows it.

    The design is fairly good, if a little scratch-prone IMHO. I would have put a regular USB port on the side of the unit instead of a microUSB one. The keyboard has another USB port, so there is a total of two ports.
    A male microUSB to female USB-A dongle is bundled with the device, so this isn't a big issue, but personally I've hated dongles since the days of PCMCIA network cards (because there's some magic around them that makes them disappear sooner or later).

    The embedded NFC and Bluetooth could be a good option to attach a mouse without sacrificing one of the two precious USB ports, while BitLocker can use the integrated TPM module to securely encrypt data.

    The really big drawback of the unit Acer sent me is the mere 32 GB of internal storage, which leaves really little space for documents and personal data once apps and other software (like Visual Studio Express or standard Office) start being installed.
    There is a microSD slot that accepts cards up to 32 GB (64 GB cards are unsupported), so data, music, pictures, etc. can be stored there.

    I had some stability issues during the first week, but they were greatly reduced by the following driver updates.
    I haven't had one since the January 13 driver update.

    Overall, this being my first tablet, I'm pretty satisfied with it. Of course I have different needs from standard users: I wouldn't have cared if the Iconia weighed 1 lb more or were 1/4″ thicker, if in exchange it had an mSATA SSD instead of one soldered on the mainboard.

    In the end, I think the Acer Iconia W510 is a very good product because, before being a tablet, it is a PC.
    That means, when choosing a tablet, that the Iconia (as well as the other "Clover Trail" based tablets) has no restrictions tied to any app store or market, can be fully integrated in a business / enterprise environment when running Windows 8 Pro (like mine) and can be connected to any device with available drivers for Windows 8 / 7 or Vista.

    Many friends of mine are starting to consider this product a good balance between a high-end netbook and a mid-range tablet. Of course high-end x86 tablets offer more, but at a higher price; Acer itself produces the Iconia W700, which belongs to another class of products.
    After a single month some things are starting to become addictive: that's a sign that the product is good!

    Again, many thanks to Microsoft Italy and Acer Italy for this amazing Iconia W510.

    Bye



    Area 88, an anime I knew nothing about until a month ago. This wouldn't be a bad thing on its own, but it actually is, because Area 88 is an anime (and a manga) strongly related to aviation: how could I have missed it?

    As the title suggests, there are two anime series: the first one is a series of three OVAs released in the mid '80s, the second one is a 12-episode TV series that aired in 2004. Both have more or less the same plot because they are both based on the manga of the same title.

    As there are no real spoilers here, you can continue reading without fear.

    Because I like to see things in order, I watched the OVA series first, followed by the TV series less than two weeks later. Again because I like to do things in order, I'll start by talking about the aviation side of the show.
    If you know a little about aviation and air forces, then there is no problem; if you know a lot, then there could be some problems; if you have been an aviation enthusiast since 1989, then there will be some more problems.

    Nevertheless, the show will be really enjoyable anyway.

    Even if some errors are less noticeable than others, like the F-15 / Tornado -style pylons on the F-8 "Crusader" (you do remember the "Crusader"'s pylons and hardpoints, don't you?) or a drop tank mounted under an F-14A "Tomcat" centerline "Sparrow" mountpoint, seeing aircraft like the F-4 "Phantom II" flying without the RIO, or the F-14A used as a ground-attack aircraft during the Vietnam conflict, may worry the viewer.

    Dogfights are the standard, even if there are some (at least one) long-range engagements using the "Sparrow" SARH AAMs.

    The main character, Shin Kazama [風間真], could probably shoot down a "Flanker" while flying a Sopwith "Camel" with three bullets (one of which defective), even if sometimes he screws things up. A lot.

    While the TV series is almost always action-oriented, the OVA, even if shorter, focuses more on what Shin feels and what happens back in Japan. The story development is better depicted in the OVA than in the TV series. I'm curious to see what happens in the manga, though.

    The animation is of course really different, and the aircraft of the TV series seem to come straight from Initial D; they move in the same way, that is, with the same handling as cars. When flying, I mean.
    Visual effects are of course better in the TV series, and some details can't be depicted on the OVA's hand-drawn aircraft. I think they're on par, even if for different reasons.

    More or less the same characters are present in both shows; the most notable exceptions are two mercenary pilots, Kim (who is absent from the OVA but comes from the manga) and Kitori (a brand-new character for the TV series, and currently my favourite character).

    Both shows are fun and I suggest everyone watch both, starting with the OVA as I did.
    I prefer the OVA over the TV series because it's shorter and the story is, IMHO, better developed, both around the main character and around his background.

    Bye


    Market Driven

    posted by Viking
    Jun 27

    All product names are copyrights or trademarks registered by their respective manufacturers.

    Back in the good old days, when computers weren't meant to be used – and were costly enough not to be purchased – by just anyone, there weren't any design or weight issues.
    Desktop computers were rugged and ugly and no one cared, as long as they were powerful enough for their job. Laptop computers were bulky and costly enough to be a professional / enterprise-only choice. Mobile phones were the same, and voice calls were really costly too.

    No one really cared about design until Apple made the first iMac, a PowerPC G3 based computer that looked nice and didn't seem like a computer at all, but maybe a small coloured TV. With the introduction of the various following models, more and more people started buying Apple hardware. The introduction of the iPod was another successful move, selling millions of units. Then followed the iPhone; the rest is history...

    Apple did a very good job, creating a large user base and a series of products related to – and complementary with – each other. Owning an iPod, an iPhone, an iPad, a MacBook and an iMac is not that uncommon, assuming a person can afford such an expense.
    They (sort of) share the same design or style, and people keep buying them. Of course competitors started to manufacture similar products, with sometimes good, sometimes bad results.

    As I wrote, today a device is also meant to look good and be shown to others, in a similar way to cars and girlfriends (or boyfriends). People want them to be that way, and they buy them, so there's a market for them: the evolution of the well-known supply-and-demand model.
    Of course common people aren't supposed to be "power users" or "pro users"; they simply want something that works, that keeps working without maintenance and that, in case of trouble, can be sent to a service and repair centre to be fixed, until it's so old that repairing it isn't the best choice anymore.
    The problem is that "power users" and "pro users" (like, for instance, me) don't like this way of thinking and are starting to get tired of products that are not customizable, not upgradable and not fixable.

    Once I tried to open a 5th generation 30 GB iPod (my father bought one, and he always says he'll not make the same mistake twice) to replace the dead battery – pretty common after 4 / 5 years – with a new one I found on the net for as little as 10€ (shipping from Germany included). After cursing for over an hour while trying to open that thing following various tutorials I found on the net, I gave up, but I'm still wondering why on Earth Apple's engineers / designers didn't simply put four little Torx screws on the rear. Of course I already know the answer: because people don't like seeing screws, even when they're covered by plastic or rubber caps; because devices without screws sell better; because the vast majority of people are not expected to replace a battery, they're expected to replace the whole product with a costly new one.

    Of course, from a "corporate" point of view, no one can blame Apple in any way. They're absolutely right – no sarcasm here. They sell a lot, and that demonstrates that they're doing the right thing, manufacturing devices that people want.

    But, considering how many things (TVs, computers, LCD and CRT monitors, various electronic devices, etc.) I've successfully disassembled, repaired and reassembled with a minimal investment of time and money – and, sometimes, no money at all – from one point of view it's sad to see how every customer is assumed to be so dumb that he's unable to use a screwdriver to replace a hard drive or a RAM module. From another point of view, alas, almost no customer will ever need that capability, because he'll never replace the battery, add RAM to his system or replace the hard drive: as simple as it is, he's not able to.

    In the meantime, I'll avoid buying phones without interchangeable batteries, laptops without standard screws or any other device that is, beyond its inherent limits, not serviceable, not upgradable and not fixable.
    The question is: how long will such devices remain available on the consumer market?

    Bye
